Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1630676965 - Will randomize all specs
Will run 5484 specs

Running in parallel across 10 nodes

Sep 3 13:49:27.667: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 13:49:27.672: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Sep 3 13:49:27.699: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep 3 13:49:27.749: INFO: 18 / 18 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep 3 13:49:27.749: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Sep 3 13:49:27.749: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep 3 13:49:27.762: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
Sep 3 13:49:27.762: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Sep 3 13:49:27.762: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Sep 3 13:49:27.762: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed)
Sep 3 13:49:27.762: INFO: e2e test version: v1.19.11
Sep 3 13:49:27.764: INFO: kube-apiserver version: v1.19.11
Sep 3 13:49:27.764: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 13:49:27.770: INFO: Cluster IP family: ipv4
Sep 3 13:49:27.768: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 13:49:27.789: INFO: Cluster IP family: ipv4
Sep 3 13:49:27.768: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 13:49:27.789: INFO: Cluster IP family: ipv4
Sep 3 13:49:27.781: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 13:49:27.804: INFO: Cluster IP family: ipv4
Sep 3 13:49:27.781: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 13:49:27.806: INFO: Cluster IP family: ipv4
Sep 3 13:49:27.795: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 13:49:27.814: INFO: Cluster IP family: ipv4
Sep 3 13:49:27.801: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 13:49:27.819: INFO: Cluster IP family: ipv4
Sep 3 13:49:27.805: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 13:49:27.821: INFO: Cluster IP family: ipv4
Sep 3 13:49:27.811: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 13:49:27.828: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Sep 3 13:49:27.882: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 13:49:27.902: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:49:28.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
Sep 3 13:49:28.074: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Sep 3 13:49:28.079: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap that has name configmap-test-emptyKey-170dd60a-c2e4-4c2e-be21-127402d05c33
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:49:28.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6002" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":1,"skipped":102,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:49:28.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should add annotations for pods in rc [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating Agnhost RC
Sep 3 13:49:28.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2110 create -f -'
Sep 3 13:49:28.549: INFO: stderr: ""
Sep 3 13:49:28.549: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Sep 3 13:49:29.553: INFO: Selector matched 1 pods for map[app:agnhost]
Sep 3 13:49:29.553: INFO: Found 1 / 1
Sep 3 13:49:29.553: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Sep 3 13:49:29.556: INFO: Selector matched 1 pods for map[app:agnhost]
Sep 3 13:49:29.556: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Sep 3 13:49:29.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2110 patch pod agnhost-primary-8szfb -p {"metadata":{"annotations":{"x":"y"}}}'
Sep 3 13:49:29.688: INFO: stderr: ""
Sep 3 13:49:29.688: INFO: stdout: "pod/agnhost-primary-8szfb patched\n"
STEP: checking annotations
Sep 3 13:49:29.702: INFO: Selector matched 1 pods for map[app:agnhost]
Sep 3 13:49:29.702: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:49:29.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2110" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":2,"skipped":167,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:49:27.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Sep 3 13:49:27.821: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Sep 3 13:49:27.827: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-9d40db87-f975-46f9-935e-3ede7d62fc99
STEP: Creating a pod to test consume configMaps
Sep 3 13:49:27.837: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7f4fa974-83cc-4bfb-857f-7daa776982d7" in namespace "projected-6174" to be "Succeeded or Failed"
Sep 3 13:49:27.839: INFO: Pod "pod-projected-configmaps-7f4fa974-83cc-4bfb-857f-7daa776982d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.529472ms
Sep 3 13:49:29.842: INFO: Pod "pod-projected-configmaps-7f4fa974-83cc-4bfb-857f-7daa776982d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005549018s
STEP: Saw pod success
Sep 3 13:49:29.842: INFO: Pod "pod-projected-configmaps-7f4fa974-83cc-4bfb-857f-7daa776982d7" satisfied condition "Succeeded or Failed"
Sep 3 13:49:29.849: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-projected-configmaps-7f4fa974-83cc-4bfb-857f-7daa776982d7 container projected-configmap-volume-test: 
STEP: delete the pod
Sep 3 13:49:31.322: INFO: Waiting for pod pod-projected-configmaps-7f4fa974-83cc-4bfb-857f-7daa776982d7 to disappear
Sep 3 13:49:31.417: INFO: Pod pod-projected-configmaps-7f4fa974-83cc-4bfb-857f-7daa776982d7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:49:31.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6174" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:49:27.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
Sep 3 13:49:27.828: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Sep 3 13:49:27.831: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 3 13:49:27.839: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cc8bf990-a85d-4c84-baee-9c96161080d7" in namespace "downward-api-1835" to be "Succeeded or Failed"
Sep 3 13:49:27.842: INFO: Pod "downwardapi-volume-cc8bf990-a85d-4c84-baee-9c96161080d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.540056ms
Sep 3 13:49:29.850: INFO: Pod "downwardapi-volume-cc8bf990-a85d-4c84-baee-9c96161080d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010137256s
Sep 3 13:49:31.924: INFO: Pod "downwardapi-volume-cc8bf990-a85d-4c84-baee-9c96161080d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084460102s
STEP: Saw pod success
Sep 3 13:49:31.924: INFO: Pod "downwardapi-volume-cc8bf990-a85d-4c84-baee-9c96161080d7" satisfied condition "Succeeded or Failed"
Sep 3 13:49:31.926: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod downwardapi-volume-cc8bf990-a85d-4c84-baee-9c96161080d7 container client-container: 
STEP: delete the pod
Sep 3 13:49:32.031: INFO: Waiting for pod downwardapi-volume-cc8bf990-a85d-4c84-baee-9c96161080d7 to disappear
Sep 3 13:49:32.034: INFO: Pod downwardapi-volume-cc8bf990-a85d-4c84-baee-9c96161080d7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:49:32.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1835" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:49:32.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should delete a collection of events [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create set of events
STEP: get a list of Events with a label in the current namespace
STEP: delete a list of events
Sep 3 13:49:32.106: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
[AfterEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:49:32.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-364" for this suite.
•
------------------------------
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:49:27.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
Sep 3 13:49:27.825: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Sep 3 13:49:27.829: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Sep 3 13:49:27.832: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:49:33.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5085" for this suite.
• [SLOW TEST:5.254 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":1,"skipped":11,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:49:29.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on tmpfs
Sep 3 13:49:29.755: INFO: Waiting up to 5m0s for pod "pod-556f7e0d-9bdf-4ca4-9470-398daa4c1f11" in namespace "emptydir-3361" to be "Succeeded or Failed"
Sep 3 13:49:29.758: INFO: Pod "pod-556f7e0d-9bdf-4ca4-9470-398daa4c1f11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197454ms
Sep 3 13:49:31.818: INFO: Pod "pod-556f7e0d-9bdf-4ca4-9470-398daa4c1f11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062570166s
Sep 3 13:49:33.822: INFO: Pod "pod-556f7e0d-9bdf-4ca4-9470-398daa4c1f11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06622464s
STEP: Saw pod success
Sep 3 13:49:33.822: INFO: Pod "pod-556f7e0d-9bdf-4ca4-9470-398daa4c1f11" satisfied condition "Succeeded or Failed"
Sep 3 13:49:33.825: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-556f7e0d-9bdf-4ca4-9470-398daa4c1f11 container test-container: 
STEP: delete the pod
Sep 3 13:49:33.837: INFO: Waiting for pod pod-556f7e0d-9bdf-4ca4-9470-398daa4c1f11 to disappear
Sep 3 13:49:33.840: INFO: Pod pod-556f7e0d-9bdf-4ca4-9470-398daa4c1f11 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:49:33.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3361" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":173,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:49:27.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
Sep 3 13:49:27.858: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Sep 3 13:49:27.860: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 3 13:49:27.863: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:49:34.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6648" for this suite.
• [SLOW TEST:6.863 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":1,"skipped":17,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:49:27.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
Sep 3 13:49:27.822: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Sep 3 13:49:27.827: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Sep 3 13:49:28.343: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Sep 3 13:49:31.321: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273768, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273768, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273768, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273768, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 3 13:49:34.332: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 3 13:49:34.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:49:35.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-8181" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:7.628 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":1,"skipped":16,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:49:31.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
Sep 3 13:49:32.310: INFO: role binding crd-conversion-webhook-auth-reader already exists
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Sep 3 13:49:32.324: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 3 13:49:35.351: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 3 13:49:35.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:49:36.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-2246" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:49:33.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:49:37.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2636" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":2,"skipped":75,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:49:37.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
STEP: listing events with field selection filtering on source
STEP: listing events with field selection filtering on reportingController
STEP: getting the test event
STEP: patching the test event
STEP: getting the test event
STEP: updating the test event
STEP: getting the test event
STEP: deleting the test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
[AfterEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:49:37.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2421" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":3,"skipped":81,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:49:37.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-813442fe-a745-4829-93b4-85e4a5f361f8
STEP: Creating a pod to test consume configMaps
Sep 3 13:49:37.363: INFO: Waiting up to 5m0s for pod "pod-configmaps-87a50977-d9b9-4355-ad41-c6bc0832a013" in namespace "configmap-4812" to be "Succeeded or Failed"
Sep 3 13:49:37.366: INFO: Pod "pod-configmaps-87a50977-d9b9-4355-ad41-c6bc0832a013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.564448ms
Sep 3 13:49:39.369: INFO: Pod "pod-configmaps-87a50977-d9b9-4355-ad41-c6bc0832a013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006269776s
STEP: Saw pod success
Sep 3 13:49:39.369: INFO: Pod "pod-configmaps-87a50977-d9b9-4355-ad41-c6bc0832a013" satisfied condition "Succeeded or Failed"
Sep 3 13:49:39.372: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-7jvhm pod pod-configmaps-87a50977-d9b9-4355-ad41-c6bc0832a013 container configmap-volume-test: 
STEP: delete the pod
Sep 3 13:49:39.402: INFO: Waiting for pod pod-configmaps-87a50977-d9b9-4355-ad41-c6bc0832a013 to disappear
Sep 3 13:49:39.405: INFO: Pod pod-configmaps-87a50977-d9b9-4355-ad41-c6bc0832a013 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:49:39.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4812" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":99,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:49:35.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Sep 3 13:49:35.464: INFO: Waiting up to 5m0s for pod "downward-api-792a3ff6-5c1e-46b6-a2d8-32581781e09a" in namespace "downward-api-6202" to be "Succeeded or Failed"
Sep 3 13:49:35.466: INFO: Pod "downward-api-792a3ff6-5c1e-46b6-a2d8-32581781e09a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.432019ms
Sep 3 13:49:37.469: INFO: Pod "downward-api-792a3ff6-5c1e-46b6-a2d8-32581781e09a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005540781s
Sep 3 13:49:39.473: INFO: Pod "downward-api-792a3ff6-5c1e-46b6-a2d8-32581781e09a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008959334s
STEP: Saw pod success
Sep 3 13:49:39.473: INFO: Pod "downward-api-792a3ff6-5c1e-46b6-a2d8-32581781e09a" satisfied condition "Succeeded or Failed"
Sep 3 13:49:39.476: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod downward-api-792a3ff6-5c1e-46b6-a2d8-32581781e09a container dapi-container: 
STEP: delete the pod
Sep 3 13:49:39.491: INFO: Waiting for pod downward-api-792a3ff6-5c1e-46b6-a2d8-32581781e09a to disappear
Sep 3 13:49:39.494: INFO: Pod downward-api-792a3ff6-5c1e-46b6-a2d8-32581781e09a no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:49:39.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6202" for this suite.
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:49:34.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 3 13:49:34.739: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Sep 3 13:49:39.742: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Sep 3 13:49:39.742: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 3 13:49:39.758: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-4725 /apis/apps/v1/namespaces/deployment-4725/deployments/test-cleanup-deployment ce0513cc-579e-4d0e-a791-8d087656b464 1045143 1 2021-09-03 13:49:39 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2021-09-03 13:49:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0015164a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Sep 3 13:49:39.760: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. Sep 3 13:49:39.760: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Sep 3 13:49:39.761: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-4725 /apis/apps/v1/namespaces/deployment-4725/replicasets/test-cleanup-controller f5d13efb-abc4-49ad-952a-2176648faf78 1045144 1 2021-09-03 13:49:34 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment ce0513cc-579e-4d0e-a791-8d087656b464 0xc001516877 0xc001516878}] [] [{e2e.test Update apps/v1 2021-09-03 13:49:34 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-09-03 13:49:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"ce0513cc-579e-4d0e-a791-8d087656b464\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001516918 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Sep 3 13:49:39.764: INFO: Pod "test-cleanup-controller-fqv44" is available: &Pod{ObjectMeta:{test-cleanup-controller-fqv44 test-cleanup-controller- deployment-4725 /api/v1/namespaces/deployment-4725/pods/test-cleanup-controller-fqv44 88506877-8da5-4d06-95d5-a2f0f9e6207a 1045070 0 2021-09-03 13:49:34 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller f5d13efb-abc4-49ad-952a-2176648faf78 0xc001516c97 0xc001516c98}] [] [{kube-controller-manager Update v1 2021-09-03 13:49:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f5d13efb-abc4-49ad-952a-2176648faf78\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-09-03 13:49:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.233\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bpk9x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bpk9x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList
{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bpk9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-5n8xl,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:49:34 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:49:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:49:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:49:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:192.168.2.233,StartTime:2021-09-03 13:49:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-09-03 13:49:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c2f622695e8db710367f08103f13121036be12a6d41fc6bd203d61959fe6734b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.233,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:49:39.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4725" for this suite. 
• [SLOW TEST:5.068 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":2,"skipped":18,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:49:33.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 3 13:49:33.890: INFO: Waiting up to 5m0s for pod "downwardapi-volume-05b77736-12b9-4d1d-8122-fe2f47c792f1" in namespace "downward-api-7375" to be "Succeeded or Failed" Sep 3 13:49:33.893: INFO: Pod "downwardapi-volume-05b77736-12b9-4d1d-8122-fe2f47c792f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.735792ms Sep 3 13:49:35.896: INFO: Pod "downwardapi-volume-05b77736-12b9-4d1d-8122-fe2f47c792f1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006254043s Sep 3 13:49:37.900: INFO: Pod "downwardapi-volume-05b77736-12b9-4d1d-8122-fe2f47c792f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00978872s Sep 3 13:49:39.903: INFO: Pod "downwardapi-volume-05b77736-12b9-4d1d-8122-fe2f47c792f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012808858s STEP: Saw pod success Sep 3 13:49:39.903: INFO: Pod "downwardapi-volume-05b77736-12b9-4d1d-8122-fe2f47c792f1" satisfied condition "Succeeded or Failed" Sep 3 13:49:39.906: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod downwardapi-volume-05b77736-12b9-4d1d-8122-fe2f47c792f1 container client-container: STEP: delete the pod Sep 3 13:49:39.919: INFO: Waiting for pod downwardapi-volume-05b77736-12b9-4d1d-8122-fe2f47c792f1 to disappear Sep 3 13:49:39.921: INFO: Pod downwardapi-volume-05b77736-12b9-4d1d-8122-fe2f47c792f1 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:49:39.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7375" for this suite. 
• [SLOW TEST:6.072 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":178,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:49:36.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Sep 3 13:49:36.667: INFO: Waiting up to 5m0s for pod "pod-b2383801-2b2f-4efd-a32b-27e53e2168b0" in namespace "emptydir-8398" to be "Succeeded or Failed" Sep 3 13:49:36.669: INFO: Pod "pod-b2383801-2b2f-4efd-a32b-27e53e2168b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2265ms Sep 3 13:49:38.673: INFO: Pod "pod-b2383801-2b2f-4efd-a32b-27e53e2168b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00582034s Sep 3 13:49:40.676: INFO: Pod "pod-b2383801-2b2f-4efd-a32b-27e53e2168b0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009151696s STEP: Saw pod success Sep 3 13:49:40.676: INFO: Pod "pod-b2383801-2b2f-4efd-a32b-27e53e2168b0" satisfied condition "Succeeded or Failed" Sep 3 13:49:40.679: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-b2383801-2b2f-4efd-a32b-27e53e2168b0 container test-container: STEP: delete the pod Sep 3 13:49:40.692: INFO: Waiting for pod pod-b2383801-2b2f-4efd-a32b-27e53e2168b0 to disappear Sep 3 13:49:40.694: INFO: Pod pod-b2383801-2b2f-4efd-a32b-27e53e2168b0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:49:40.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8398" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":21,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:49:27.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi Sep 3 13:49:27.934: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Sep 3 13:49:27.937: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 3 13:49:27.941: INFO: >>> kubeConfig: /root/.kube/config STEP: 
client-side validation (kubectl create and apply) allows request with any unknown properties Sep 3 13:49:33.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1180 --namespace=crd-publish-openapi-1180 create -f -' Sep 3 13:49:34.243: INFO: stderr: "" Sep 3 13:49:34.243: INFO: stdout: "e2e-test-crd-publish-openapi-5169-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Sep 3 13:49:34.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1180 --namespace=crd-publish-openapi-1180 delete e2e-test-crd-publish-openapi-5169-crds test-cr' Sep 3 13:49:34.363: INFO: stderr: "" Sep 3 13:49:34.363: INFO: stdout: "e2e-test-crd-publish-openapi-5169-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Sep 3 13:49:34.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1180 --namespace=crd-publish-openapi-1180 apply -f -' Sep 3 13:49:34.723: INFO: stderr: "" Sep 3 13:49:34.723: INFO: stdout: "e2e-test-crd-publish-openapi-5169-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Sep 3 13:49:34.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1180 --namespace=crd-publish-openapi-1180 delete e2e-test-crd-publish-openapi-5169-crds test-cr' Sep 3 13:49:34.855: INFO: stderr: "" Sep 3 13:49:34.855: INFO: stdout: "e2e-test-crd-publish-openapi-5169-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Sep 3 13:49:34.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1180 explain e2e-test-crd-publish-openapi-5169-crds' Sep 3 13:49:35.126: INFO: stderr: "" Sep 3 13:49:35.126: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5169-crd\nVERSION: 
crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:49:41.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1180" for this suite. 
• [SLOW TEST:13.227 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":1,"skipped":68,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:49:39.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-48dd42e0-3172-4783-a640-5ee50f294cf9 STEP: Creating a pod to test consume configMaps Sep 3 13:49:39.558: INFO: Waiting up to 5m0s for pod "pod-configmaps-db05ad40-3950-4add-a892-9fa51d9d181f" in namespace "configmap-6168" to be "Succeeded or Failed" Sep 3 13:49:39.560: INFO: Pod "pod-configmaps-db05ad40-3950-4add-a892-9fa51d9d181f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.429855ms Sep 3 13:49:41.564: INFO: Pod "pod-configmaps-db05ad40-3950-4add-a892-9fa51d9d181f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.005877162s STEP: Saw pod success Sep 3 13:49:41.564: INFO: Pod "pod-configmaps-db05ad40-3950-4add-a892-9fa51d9d181f" satisfied condition "Succeeded or Failed" Sep 3 13:49:41.567: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-7jvhm pod pod-configmaps-db05ad40-3950-4add-a892-9fa51d9d181f container configmap-volume-test: STEP: delete the pod Sep 3 13:49:41.583: INFO: Waiting for pod pod-configmaps-db05ad40-3950-4add-a892-9fa51d9d181f to disappear Sep 3 13:49:41.585: INFO: Pod pod-configmaps-db05ad40-3950-4add-a892-9fa51d9d181f no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:49:41.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6168" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:49:39.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-614bb3c0-e1c4-4129-b838-0d9bd5ad460a STEP: Creating a pod to test consume configMaps Sep 3 13:49:39.863: INFO: Waiting up to 5m0s for pod "pod-configmaps-3ec36f98-8074-4256-870b-f429ea285f12" in namespace "configmap-4979" to be "Succeeded or 
Failed" Sep 3 13:49:39.866: INFO: Pod "pod-configmaps-3ec36f98-8074-4256-870b-f429ea285f12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127129ms Sep 3 13:49:41.868: INFO: Pod "pod-configmaps-3ec36f98-8074-4256-870b-f429ea285f12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004966604s STEP: Saw pod success Sep 3 13:49:41.868: INFO: Pod "pod-configmaps-3ec36f98-8074-4256-870b-f429ea285f12" satisfied condition "Succeeded or Failed" Sep 3 13:49:41.871: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-7jvhm pod pod-configmaps-3ec36f98-8074-4256-870b-f429ea285f12 container configmap-volume-test: STEP: delete the pod Sep 3 13:49:41.884: INFO: Waiting for pod pod-configmaps-3ec36f98-8074-4256-870b-f429ea285f12 to disappear Sep 3 13:49:41.886: INFO: Pod pod-configmaps-3ec36f98-8074-4256-870b-f429ea285f12 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:49:41.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4979" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":52,"failed":0} SS ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":2,"skipped":16,"failed":0} [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:49:32.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 3 13:49:32.156: INFO: Creating ReplicaSet my-hostname-basic-778114cb-f4b9-4aa8-aa40-9c3c1b91424c Sep 3 13:49:32.163: INFO: Pod name my-hostname-basic-778114cb-f4b9-4aa8-aa40-9c3c1b91424c: Found 0 pods out of 1 Sep 3 13:49:37.166: INFO: Pod name my-hostname-basic-778114cb-f4b9-4aa8-aa40-9c3c1b91424c: Found 1 pods out of 1 Sep 3 13:49:37.166: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-778114cb-f4b9-4aa8-aa40-9c3c1b91424c" is running Sep 3 13:49:37.168: INFO: Pod "my-hostname-basic-778114cb-f4b9-4aa8-aa40-9c3c1b91424c-rv9k2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-09-03 13:49:32 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-09-03 13:49:33 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-09-03 13:49:33 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True 
LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-09-03 13:49:32 +0000 UTC Reason: Message:}]) Sep 3 13:49:37.170: INFO: Trying to dial the pod Sep 3 13:49:42.182: INFO: Controller my-hostname-basic-778114cb-f4b9-4aa8-aa40-9c3c1b91424c: Got expected result from replica 1 [my-hostname-basic-778114cb-f4b9-4aa8-aa40-9c3c1b91424c-rv9k2]: "my-hostname-basic-778114cb-f4b9-4aa8-aa40-9c3c1b91424c-rv9k2", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:49:42.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8654" for this suite. • [SLOW TEST:10.060 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:49:27.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl Sep 3 13:49:27.886: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Sep 3 13:49:27.894: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1546 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Sep 3 13:49:27.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8073 run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod' Sep 3 13:49:28.089: INFO: stderr: "" Sep 3 13:49:28.089: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Sep 3 13:49:33.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8073 get pod e2e-test-httpd-pod -o json' Sep 3 13:49:33.264: INFO: stderr: "" Sep 3 13:49:33.264: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2021-09-03T13:49:28Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n 
\"operation\": \"Update\",\n \"time\": \"2021-09-03T13:49:28Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"192.168.1.85\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2021-09-03T13:49:29Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-8073\",\n \"resourceVersion\": \"1044705\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-8073/pods/e2e-test-httpd-pod\",\n \"uid\": \"10110cba-428a-42c5-b92f-aa6ea3e783ad\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-sv6bx\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"capi-kali-md-0-76b6798f7f-7jvhm\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": 
{},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-sv6bx\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-sv6bx\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-09-03T13:49:28Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-09-03T13:49:29Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-09-03T13:49:29Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-09-03T13:49:28Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://97eee98c26bed24819db54ad7e21e7f4897cbfd444f63cb2a15b74bf5faf7839\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-09-03T13:49:28Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.10\",\n \"phase\": \"Running\",\n \"podIP\": \"192.168.1.85\",\n \"podIPs\": [\n {\n \"ip\": \"192.168.1.85\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-09-03T13:49:28Z\"\n }\n}\n" STEP: replace the image in the pod Sep 3 13:49:33.264: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8073 replace -f -' Sep 3 13:49:33.604: INFO: stderr: "" Sep 3 13:49:33.604: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1550 Sep 3 13:49:33.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8073 delete pods e2e-test-httpd-pod' Sep 3 13:49:43.664: INFO: stderr: "" Sep 3 13:49:43.664: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:49:43.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8073" for this suite. • [SLOW TEST:15.805 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1543 should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":1,"skipped":40,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:49:41.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 3 13:49:41.926: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Sep 3 13:49:43.955: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:49:44.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8020" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":4,"skipped":54,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:49:41.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1512 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Sep 3 13:49:41.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5027 run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine' Sep 3 13:49:41.356: INFO: stderr: "" Sep 3 13:49:41.357: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 Sep 3 13:49:41.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5027 delete pods e2e-test-httpd-pod' Sep 3 13:49:46.204: INFO: stderr: "" Sep 3 13:49:46.204: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:49:46.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5027" for this suite. • [SLOW TEST:5.035 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1509 should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":2,"skipped":87,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:49:27.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook Sep 3 13:49:27.869: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Sep 3 13:49:27.873: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 3 
13:49:28.363: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 3 13:49:31.418: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273768, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273768, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273768, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273768, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 3 13:49:34.432: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow 
webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:49:46.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4923" for this suite. STEP: Destroying namespace "webhook-4923-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.736 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":1,"skipped":12,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:49:43.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name 
projected-secret-test-a06bc8e2-c5f4-4e68-9cc4-84c88d34555d STEP: Creating a pod to test consume secrets Sep 3 13:49:43.714: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-19a912a9-f994-42a4-9581-b5bfcc9ffc90" in namespace "projected-5586" to be "Succeeded or Failed" Sep 3 13:49:43.716: INFO: Pod "pod-projected-secrets-19a912a9-f994-42a4-9581-b5bfcc9ffc90": Phase="Pending", Reason="", readiness=false. Elapsed: 1.848828ms Sep 3 13:49:45.719: INFO: Pod "pod-projected-secrets-19a912a9-f994-42a4-9581-b5bfcc9ffc90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004437696s Sep 3 13:49:47.723: INFO: Pod "pod-projected-secrets-19a912a9-f994-42a4-9581-b5bfcc9ffc90": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008339777s Sep 3 13:49:49.726: INFO: Pod "pod-projected-secrets-19a912a9-f994-42a4-9581-b5bfcc9ffc90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011806177s STEP: Saw pod success Sep 3 13:49:49.726: INFO: Pod "pod-projected-secrets-19a912a9-f994-42a4-9581-b5bfcc9ffc90" satisfied condition "Succeeded or Failed" Sep 3 13:49:49.729: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-7jvhm pod pod-projected-secrets-19a912a9-f994-42a4-9581-b5bfcc9ffc90 container projected-secret-volume-test: STEP: delete the pod Sep 3 13:49:49.741: INFO: Waiting for pod pod-projected-secrets-19a912a9-f994-42a4-9581-b5bfcc9ffc90 to disappear Sep 3 13:49:49.744: INFO: Pod pod-projected-secrets-19a912a9-f994-42a4-9581-b5bfcc9ffc90 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:49:49.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5586" for this suite. 
• [SLOW TEST:6.072 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":44,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:49:46.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Sep 3 13:49:46.257: INFO: Pod name pod-release: Found 0 pods out of 1 Sep 3 13:49:51.260: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:49:52.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5299" for this suite. 
• [SLOW TEST:6.054 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":3,"skipped":99,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:49:39.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:49:53.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4861" for this suite. 
• [SLOW TEST:14.046 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":5,"skipped":187,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:49:42.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Sep 3 13:49:42.262: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:49:54.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1531" for this suite. 
• [SLOW TEST:11.893 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":4,"skipped":39,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:49:49.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 3 13:49:50.246: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 3 13:49:52.256: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273790, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273790, loc:(*time.Location)(0x770e980)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273790, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273790, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 3 13:49:54.418: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273790, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273790, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273790, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273790, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 3 13:49:57.428: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the 
webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:49:57.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7828" for this suite. STEP: Destroying namespace "webhook-7828-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.729 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":3,"skipped":57,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:49:57.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Sep 3 13:49:57.566: INFO: Created pod &Pod{ObjectMeta:{dns-3945 dns-3945 /api/v1/namespaces/dns-3945/pods/dns-3945 6507b43d-3e7c-45f3-94d9-737a73fb9d00 1045983 0 2021-09-03 13:49:57 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-09-03 13:49:57 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gh52x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gh52x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gh52x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,}
,},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:49:57.569: INFO: The status of Pod dns-3945 is Pending, waiting for it to be Running (with Ready = true) Sep 3 13:49:59.572: INFO: The status of Pod dns-3945 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
Sep 3 13:49:59.572: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-3945 PodName:dns-3945 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 3 13:49:59.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Verifying customized DNS server is configured on pod...
Sep 3 13:49:59.693: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-3945 PodName:dns-3945 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 3 13:49:59.693: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 13:49:59.787: INFO: Deleting pod dns-3945...
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:49:59.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3945" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":4,"skipped":74,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:49:54.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3145.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3145.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 3 13:50:00.567: INFO: DNS probes using dns-3145/dns-test-56ee29a1-e411-4606-b81e-462c78e82574 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:50:00.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3145" for this suite.
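The wheezy/jessie probe loops above derive a pod's DNS A-record name from its IP with awk: each dot becomes a dash, then the namespace-scoped `.pod.cluster.local` suffix is appended. A minimal Python sketch of that same transformation (the sample IP is hypothetical; the `dns-3145` namespace and suffix format come from the log):

```python
def pod_a_record(pod_ip: str, namespace: str, cluster_domain: str = "cluster.local") -> str:
    """Rewrite a pod IP into its in-cluster A-record name: dots become dashes,
    then the <namespace>.pod.<cluster-domain> suffix is appended."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"

# Mirrors: hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-3145.pod.cluster.local"}'
print(pod_a_record("10.244.1.5", "dns-3145"))  # 10-244-1-5.dns-3145.pod.cluster.local
```

The probe then resolves that name over both UDP (`dig +notcp`) and TCP (`dig +tcp`) and writes an `OK` marker file per successful lookup.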
• [SLOW TEST:6.571 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for the cluster [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":6,"skipped":198,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:49:41.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Sep 3 13:49:41.621: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 13:49:46.264: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:50:03.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9355" for this suite.
• [SLOW TEST:21.529 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of different groups [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":4,"skipped":26,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:49:40.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: set up a multi version CRD
Sep 3 13:49:40.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:50:08.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5473" for this suite.
• [SLOW TEST:27.605 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
updates the published spec when one version gets renamed [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":4,"skipped":38,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:50:00.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should serve a basic image on each replica with a public image [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating replication controller my-hostname-basic-9a374b6b-c2e9-4e58-a823-cd51ee9d3feb
Sep 3 13:50:00.656: INFO: Pod name my-hostname-basic-9a374b6b-c2e9-4e58-a823-cd51ee9d3feb: Found 0 pods out of 1
Sep 3 13:50:05.660: INFO: Pod name my-hostname-basic-9a374b6b-c2e9-4e58-a823-cd51ee9d3feb: Found 1 pods out of 1
Sep 3 13:50:05.660: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-9a374b6b-c2e9-4e58-a823-cd51ee9d3feb" are running
Sep 3 13:50:05.662: INFO: Pod "my-hostname-basic-9a374b6b-c2e9-4e58-a823-cd51ee9d3feb-v4dt6" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-09-03 13:50:00 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-09-03 13:50:01 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-09-03 13:50:01 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-09-03 13:50:00 +0000 UTC Reason: Message:}])
Sep 3 13:50:05.663: INFO: Trying to dial the pod
Sep 3 13:50:10.674: INFO: Controller my-hostname-basic-9a374b6b-c2e9-4e58-a823-cd51ee9d3feb: Got expected result from replica 1 [my-hostname-basic-9a374b6b-c2e9-4e58-a823-cd51ee9d3feb-v4dt6]: "my-hostname-basic-9a374b6b-c2e9-4e58-a823-cd51ee9d3feb-v4dt6", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:50:10.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4465" for this suite.
• [SLOW TEST:10.064 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":7,"skipped":219,"failed":0}
S
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:50:10.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should serve multiport endpoints from pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service multi-endpoint-test in namespace services-9412
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9412 to expose endpoints map[]
Sep 3 13:50:10.747: INFO: Failed to get Endpoints object: endpoints "multi-endpoint-test" not found
Sep 3 13:50:11.755: INFO: successfully validated that service multi-endpoint-test in namespace services-9412 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-9412
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9412 to expose endpoints map[pod1:[100]]
Sep 3 13:50:13.780: INFO: successfully validated that service multi-endpoint-test in namespace services-9412 exposes endpoints map[pod1:[100]]
STEP: Creating pod pod2 in namespace services-9412
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9412 to expose endpoints map[pod1:[100] pod2:[101]]
Sep 3 13:50:15.799: INFO: successfully validated that service multi-endpoint-test in namespace services-9412 exposes endpoints map[pod1:[100] pod2:[101]]
STEP: Deleting pod pod1 in namespace services-9412
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9412 to expose endpoints map[pod2:[101]]
Sep 3 13:50:15.816: INFO: successfully validated that service multi-endpoint-test in namespace services-9412 exposes endpoints map[pod2:[101]]
STEP: Deleting pod pod2 in namespace services-9412
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9412 to expose endpoints map[]
Sep 3 13:50:15.828: INFO: successfully validated that service multi-endpoint-test in namespace services-9412 exposes endpoints map[]
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:50:15.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9412" for this suite.
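The transitions above (map[] → map[pod1:[100]] → map[pod1:[100] pod2:[101]] → map[pod2:[101]] → map[]) are the Endpoints object tracking which ready pods back the service. A toy sketch of the bookkeeping the test asserts, using the pod names and ports from the log (the helper itself is illustrative, not the e2e framework's code):

```python
def expected_endpoints(ready_pods: dict) -> dict:
    """Endpoints the service should expose: one entry per ready pod, ports sorted."""
    return {name: sorted(ports) for name, ports in ready_pods.items()}

ready = {}
assert expected_endpoints(ready) == {}                      # service created, no pods yet
ready["pod1"] = [100]                                       # STEP: Creating pod pod1
assert expected_endpoints(ready) == {"pod1": [100]}
ready["pod2"] = [101]                                       # STEP: Creating pod pod2
assert expected_endpoints(ready) == {"pod1": [100], "pod2": [101]}
del ready["pod1"]                                           # STEP: Deleting pod pod1
assert expected_endpoints(ready) == {"pod2": [101]}
del ready["pod2"]                                           # STEP: Deleting pod pod2
assert expected_endpoints(ready) == {}
```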
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:5.167 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve multiport endpoints from pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":8,"skipped":220,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:50:15.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name cm-test-opt-del-8e810649-03e4-4b85-9e3a-8020454ddc92
STEP: Creating configMap with name cm-test-opt-upd-c91b10d3-e6a4-47a6-b933-ba411b1a05f3
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-8e810649-03e4-4b85-9e3a-8020454ddc92
STEP: Updating configmap cm-test-opt-upd-c91b10d3-e6a4-47a6-b933-ba411b1a05f3
STEP: Creating configMap with name cm-test-opt-create-9b878645-368d-4b98-91ce-6680aa2babfc
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:50:21.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-653" for this suite.
• [SLOW TEST:6.122 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":232,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:50:22.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test override arguments
Sep 3 13:50:22.062: INFO: Waiting up to 5m0s for pod "client-containers-c42e521a-82a1-420b-93b0-481a9f2cd9ca" in namespace "containers-5365" to be "Succeeded or Failed"
Sep 3 13:50:22.065: INFO: Pod "client-containers-c42e521a-82a1-420b-93b0-481a9f2cd9ca": Phase="Pending", Reason="", readiness=false. Elapsed: 3.157645ms
Sep 3 13:50:24.069: INFO: Pod "client-containers-c42e521a-82a1-420b-93b0-481a9f2cd9ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007497557s
STEP: Saw pod success
Sep 3 13:50:24.069: INFO: Pod "client-containers-c42e521a-82a1-420b-93b0-481a9f2cd9ca" satisfied condition "Succeeded or Failed"
Sep 3 13:50:24.073: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod client-containers-c42e521a-82a1-420b-93b0-481a9f2cd9ca container test-container:
STEP: delete the pod
Sep 3 13:50:24.087: INFO: Waiting for pod client-containers-c42e521a-82a1-420b-93b0-481a9f2cd9ca to disappear
Sep 3 13:50:24.090: INFO: Pod client-containers-c42e521a-82a1-420b-93b0-481a9f2cd9ca no longer exists
[AfterEach] [k8s.io] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:50:24.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5365" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":243,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:50:24.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Sep 3 13:50:24.169: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6291 /api/v1/namespaces/watch-6291/configmaps/e2e-watch-test-resource-version b1076ad7-aef4-487a-a712-190451b3e630 1046420 0 2021-09-03 13:50:24 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-09-03 13:50:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 3 13:50:24.169: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6291 /api/v1/namespaces/watch-6291/configmaps/e2e-watch-test-resource-version b1076ad7-aef4-487a-a712-190451b3e630 1046421 0 2021-09-03 13:50:24 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-09-03 13:50:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:50:24.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6291" for this suite.
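Starting a watch from the resource version returned by the first update means only the events newer than that version are delivered, which is why the log shows exactly one MODIFIED (rv 1046420) and one DELETED (rv 1046421) event. A minimal sketch of that replay semantic (a pure-Python illustration, not the client-go or kubernetes-client API; the two earlier resource versions are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Event:
    type: str              # ADDED / MODIFIED / DELETED
    resource_version: int  # increases with each change to the object

def replay_from(events, start_rv):
    """Deliver only events newer than the resource version the watch started from."""
    return [e for e in events if e.resource_version > start_rv]

history = [
    Event("ADDED", 1046417),      # hypothetical: configmap created
    Event("MODIFIED", 1046418),   # hypothetical: first update; watch starts from this rv
    Event("MODIFIED", 1046420),   # from the log: second update
    Event("DELETED", 1046421),    # from the log: deletion
]
seen = replay_from(history, start_rv=1046418)
print([e.type for e in seen])  # ['MODIFIED', 'DELETED']
```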
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":11,"skipped":251,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:50:24.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Sep 3 13:50:24.235: INFO: Waiting up to 5m0s for pod "downward-api-4640751b-58e3-4546-ac12-621d7990d03c" in namespace "downward-api-8077" to be "Succeeded or Failed"
Sep 3 13:50:24.238: INFO: Pod "downward-api-4640751b-58e3-4546-ac12-621d7990d03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.51314ms
Sep 3 13:50:26.242: INFO: Pod "downward-api-4640751b-58e3-4546-ac12-621d7990d03c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006638305s
STEP: Saw pod success
Sep 3 13:50:26.242: INFO: Pod "downward-api-4640751b-58e3-4546-ac12-621d7990d03c" satisfied condition "Succeeded or Failed"
Sep 3 13:50:26.244: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-7jvhm pod downward-api-4640751b-58e3-4546-ac12-621d7990d03c container dapi-container:
STEP: delete the pod
Sep 3 13:50:26.258: INFO: Waiting for pod downward-api-4640751b-58e3-4546-ac12-621d7990d03c to disappear
Sep 3 13:50:26.261: INFO: Pod downward-api-4640751b-58e3-4546-ac12-621d7990d03c no longer exists
[AfterEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:50:26.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8077" for this suite.
•
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:50:08.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: set up a multi version CRD
Sep 3 13:50:08.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:50:29.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-825" for this suite.
• [SLOW TEST:21.136 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
removes definition from spec when one version gets changed to not be served [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":5,"skipped":79,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:50:29.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-07830c72-2e23-4a61-85b5-a55911553572
STEP: Creating a pod to test consume secrets
Sep 3 13:50:29.619: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7df63a16-b657-4694-b4ed-5f3e24d28f5f" in namespace "projected-2121" to be "Succeeded or Failed"
Sep 3 13:50:29.621: INFO: Pod "pod-projected-secrets-7df63a16-b657-4694-b4ed-5f3e24d28f5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.688511ms
Sep 3 13:50:31.915: INFO: Pod "pod-projected-secrets-7df63a16-b657-4694-b4ed-5f3e24d28f5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.296602168s
STEP: Saw pod success
Sep 3 13:50:31.915: INFO: Pod "pod-projected-secrets-7df63a16-b657-4694-b4ed-5f3e24d28f5f" satisfied condition "Succeeded or Failed"
Sep 3 13:50:31.918: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-projected-secrets-7df63a16-b657-4694-b4ed-5f3e24d28f5f container projected-secret-volume-test:
STEP: delete the pod
Sep 3 13:50:32.019: INFO: Waiting for pod pod-projected-secrets-7df63a16-b657-4694-b4ed-5f3e24d28f5f to disappear
Sep 3 13:50:32.022: INFO: Pod pod-projected-secrets-7df63a16-b657-4694-b4ed-5f3e24d28f5f no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:50:32.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2121" for this suite.
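Several of the pod tests above poll the pod until it reports "Succeeded or Failed", logging the elapsed time at each check (3.157645ms, 2.007497557s, ...). The underlying pattern is a simple poll-with-deadline loop; a generic sketch under assumed names (`wait_for` and the fake phase sequence are illustrative, not the e2e framework's helper):

```python
import time

def wait_for(condition, timeout_s=300.0, interval_s=2.0, clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` every `interval_s` seconds; return elapsed time once it is
    truthy, or raise TimeoutError when `timeout_s` elapses first."""
    start = clock()
    while True:
        if condition():
            return clock() - start
        if clock() - start >= timeout_s:
            raise TimeoutError(f"condition not met within {timeout_s}s")
        sleep(interval_s)

# Example with a fake phase sequence instead of a real cluster:
phases = iter(["Pending", "Pending", "Succeeded"])
current = {"phase": None}
def pod_finished():
    current["phase"] = next(phases)
    return current["phase"] in ("Succeeded", "Failed")

wait_for(pod_finished, timeout_s=10, interval_s=0)
print(current["phase"])  # Succeeded
```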
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":96,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:49:52.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-938, will wait for the garbage collector to delete the pods
Sep 3 13:49:56.621: INFO: Deleting Job.batch foo took: 151.769122ms
Sep 3 13:49:56.721: INFO: Terminating Job.batch foo pods took: 100.266838ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:50:33.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-938" for this suite.
• [SLOW TEST:41.877 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should delete a job [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":4,"skipped":141,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:50:32.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-5591/configmap-test-6576e502-04a3-41c9-9e8a-448fa67cbdf4
STEP: Creating a pod to test consume configMaps
Sep 3 13:50:32.076: INFO: Waiting up to 5m0s for pod "pod-configmaps-3ff3d216-a9f5-4b25-b4b6-37cb1976f5df" in namespace "configmap-5591" to be "Succeeded or Failed"
Sep 3 13:50:32.078: INFO: Pod "pod-configmaps-3ff3d216-a9f5-4b25-b4b6-37cb1976f5df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.308582ms
Sep 3 13:50:34.219: INFO: Pod "pod-configmaps-3ff3d216-a9f5-4b25-b4b6-37cb1976f5df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.143146998s
STEP: Saw pod success
Sep 3 13:50:34.219: INFO: Pod "pod-configmaps-3ff3d216-a9f5-4b25-b4b6-37cb1976f5df" satisfied condition "Succeeded or Failed"
Sep 3 13:50:34.222: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-configmaps-3ff3d216-a9f5-4b25-b4b6-37cb1976f5df container env-test:
STEP: delete the pod
Sep 3 13:50:34.621: INFO: Waiting for pod pod-configmaps-3ff3d216-a9f5-4b25-b4b6-37cb1976f5df to disappear
Sep 3 13:50:34.623: INFO: Pod pod-configmaps-3ff3d216-a9f5-4b25-b4b6-37cb1976f5df no longer exists
[AfterEach] [sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:50:34.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5591" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":99,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:50:35.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-9d12e61f-30e7-47bb-998f-da5f389eabd3
STEP: Creating a pod to test consume secrets
Sep 3 13:50:35.751: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ffb4639d-9910-407a-add1-92a4fd428e6c" in namespace "projected-1514" to be "Succeeded or Failed"
Sep 3 13:50:35.753: INFO: Pod "pod-projected-secrets-ffb4639d-9910-407a-add1-92a4fd428e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 1.824187ms
Sep 3 13:50:37.818: INFO: Pod "pod-projected-secrets-ffb4639d-9910-407a-add1-92a4fd428e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066321804s
Sep 3 13:50:39.823: INFO: Pod "pod-projected-secrets-ffb4639d-9910-407a-add1-92a4fd428e6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07122782s
STEP: Saw pod success
Sep 3 13:50:39.823: INFO: Pod "pod-projected-secrets-ffb4639d-9910-407a-add1-92a4fd428e6c" satisfied condition "Succeeded or Failed"
Sep 3 13:50:39.826: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-7jvhm pod pod-projected-secrets-ffb4639d-9910-407a-add1-92a4fd428e6c container projected-secret-volume-test:
STEP: delete the pod
Sep 3 13:50:39.841: INFO: Waiting for pod pod-projected-secrets-ffb4639d-9910-407a-add1-92a4fd428e6c to disappear
Sep 3 13:50:39.844: INFO: Pod pod-projected-secrets-ffb4639d-9910-407a-add1-92a4fd428e6c no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:50:39.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1514" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":166,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:50:34.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Sep 3 13:50:35.232: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the sample API server.
Sep 3 13:50:35.996: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Sep 3 13:50:38.137: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273836, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273836, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273836, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273835, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 3 13:50:41.070: INFO: Waited 924.137387ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:50:41.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-4052" for this suite.
• [SLOW TEST:7.472 seconds]
[sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":5,"skipped":151,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:50:41.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Sep 3 13:50:41.863: INFO: Waiting up to 5m0s for pod "downward-api-5e8b492d-13be-41fb-82bd-e7c7821c3b2b" in namespace "downward-api-5099" to be "Succeeded or Failed"
Sep 3 13:50:41.865: INFO: Pod "downward-api-5e8b492d-13be-41fb-82bd-e7c7821c3b2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.506411ms
Sep 3 13:50:43.869: INFO: Pod "downward-api-5e8b492d-13be-41fb-82bd-e7c7821c3b2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006742336s
STEP: Saw pod success
Sep 3 13:50:43.869: INFO: Pod "downward-api-5e8b492d-13be-41fb-82bd-e7c7821c3b2b" satisfied condition "Succeeded or Failed"
Sep 3 13:50:43.873: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod downward-api-5e8b492d-13be-41fb-82bd-e7c7821c3b2b container dapi-container:
STEP: delete the pod
Sep 3 13:50:43.888: INFO: Waiting for pod downward-api-5e8b492d-13be-41fb-82bd-e7c7821c3b2b to disappear
Sep 3 13:50:43.891: INFO: Pod downward-api-5e8b492d-13be-41fb-82bd-e7c7821c3b2b no longer exists
[AfterEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:50:43.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5099" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":162,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:50:39.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Sep 3 13:50:42.936: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:50:43.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9476" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":9,"skipped":179,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:50:43.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 3 13:50:44.490: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 3 13:50:46.527: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273844, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273844, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273844, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273844, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 3 13:50:49.540: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Sep 3 13:50:49.564: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:50:49.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9975" for this suite.
STEP: Destroying namespace "webhook-9975-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.644 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should deny crd creation [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:49:54.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0903 13:49:55.628754 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Sep 3 13:50:57.646: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:50:57.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2785" for this suite.
• [SLOW TEST:63.502 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete RS created by deployment when not orphaning [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":5,"skipped":58,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:49:27.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Sep 3 13:49:27.852: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Sep 3 13:49:27.856: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name s-test-opt-del-2b90819d-34fd-480f-b4c9-7c04c2cac642
STEP: Creating secret with name s-test-opt-upd-f8a9648c-8c84-412f-bd6e-64759798ac08
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-2b90819d-34fd-480f-b4c9-7c04c2cac642
STEP: Updating secret s-test-opt-upd-f8a9648c-8c84-412f-bd6e-64759798ac08
STEP: Creating secret with name s-test-opt-create-096f9358-ac64-429b-809b-69c2378afcc9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:03.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3953" for this suite.
• [SLOW TEST:95.442 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:03.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8081.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8081.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8081.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8081.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8081.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8081.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 3 13:51:05.420: INFO: DNS probes using dns-8081/dns-test-3cddfc13-5512-41d7-9d98-1ab14b646bb8 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:05.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8081" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":43,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:50:43.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-2583
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 3 13:50:43.938: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Sep 3 13:50:43.958: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep 3 13:50:46.018: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 3 13:50:47.962: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 3 13:50:49.963: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 3 13:50:51.962: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 3 13:50:53.962: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 3 13:50:55.962: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 3 13:50:57.962: INFO: The status of Pod netserver-0 is Running (Ready = true)
Sep 3 13:50:57.968: INFO: The status of Pod netserver-1 is Running (Ready = false)
Sep 3 13:50:59.973: INFO: The status of Pod netserver-1 is Running (Ready = false)
Sep 3 13:51:01.972: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Sep 3 13:51:04.005: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.2.11 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2583 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 3 13:51:04.005: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 13:51:05.142: INFO: Found all expected endpoints: [netserver-0]
Sep 3 13:51:05.146: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.1.119 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2583 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 3 13:51:05.146: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 13:51:06.266: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:06.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2583" for this suite.
• [SLOW TEST:22.367 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":165,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:05.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 3 13:51:05.524: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:06.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8436" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":3,"skipped":74,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:06.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:06.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-2819" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":4,"skipped":87,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:06.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run through the lifecycle of a ServiceAccount [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a ServiceAccount
STEP: watching for the ServiceAccount to be added
STEP: patching the ServiceAccount
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:06.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7808" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":5,"skipped":153,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:06.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should check is all data is printed [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 3 13:51:07.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5133 version'
Sep 3 13:51:07.115: INFO: stderr: ""
Sep 3 13:51:07.115: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19\", GitVersion:\"v1.19.11\", GitCommit:\"c6a2f08fc4378c5381dd948d9ad9d1080e3e6b33\", GitTreeState:\"clean\", BuildDate:\"2021-05-12T12:27:07Z\", GoVersion:\"go1.15.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19\", GitVersion:\"v1.19.11\", GitCommit:\"c6a2f08fc4378c5381dd948d9ad9d1080e3e6b33\", GitTreeState:\"clean\", BuildDate:\"2021-05-18T09:41:02Z\", GoVersion:\"go1.15.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:07.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5133" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":6,"skipped":178,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:06.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep 3 13:51:06.349: INFO: Waiting up to 5m0s for pod "pod-7edbbacf-3d27-4764-a584-838d8a0c28d8" in namespace "emptydir-9605" to be "Succeeded or Failed"
Sep 3 13:51:06.352: INFO: Pod "pod-7edbbacf-3d27-4764-a584-838d8a0c28d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.900049ms
Sep 3 13:51:08.355: INFO: Pod "pod-7edbbacf-3d27-4764-a584-838d8a0c28d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005983325s
STEP: Saw pod success
Sep 3 13:51:08.355: INFO: Pod "pod-7edbbacf-3d27-4764-a584-838d8a0c28d8" satisfied condition "Succeeded or Failed"
Sep 3 13:51:08.358: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-7edbbacf-3d27-4764-a584-838d8a0c28d8 container test-container:
STEP: delete the pod
Sep 3 13:51:08.373: INFO: Waiting for pod pod-7edbbacf-3d27-4764-a584-838d8a0c28d8 to disappear
Sep 3 13:51:08.376: INFO: Pod pod-7edbbacf-3d27-4764-a584-838d8a0c28d8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:08.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9605" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":186,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:07.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-16cc455a-dad7-4441-bed7-9eaeb45e9bd0
STEP: Creating a pod to test consume configMaps
Sep 3 13:51:07.269: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1d14bb91-a77c-4367-845d-2c77da18eb45" in namespace "projected-3447" to be "Succeeded or Failed"
Sep 3 13:51:07.272: INFO: Pod "pod-projected-configmaps-1d14bb91-a77c-4367-845d-2c77da18eb45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.81493ms
Sep 3 13:51:09.276: INFO: Pod "pod-projected-configmaps-1d14bb91-a77c-4367-845d-2c77da18eb45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007236292s
STEP: Saw pod success
Sep 3 13:51:09.276: INFO: Pod "pod-projected-configmaps-1d14bb91-a77c-4367-845d-2c77da18eb45" satisfied condition "Succeeded or Failed"
Sep 3 13:51:09.279: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-projected-configmaps-1d14bb91-a77c-4367-845d-2c77da18eb45 container projected-configmap-volume-test:
STEP: delete the pod
Sep 3 13:51:09.292: INFO: Waiting for pod pod-projected-configmaps-1d14bb91-a77c-4367-845d-2c77da18eb45 to disappear
Sep 3 13:51:09.295: INFO: Pod pod-projected-configmaps-1d14bb91-a77c-4367-845d-2c77da18eb45 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:09.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3447" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":238,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:08.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 3 13:51:08.454: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2679a3cd-a2c5-4cab-bac5-4512ef70c0d6" in namespace "projected-8681" to be "Succeeded or Failed"
Sep 3 13:51:08.456: INFO: Pod "downwardapi-volume-2679a3cd-a2c5-4cab-bac5-4512ef70c0d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102244ms
Sep 3 13:51:10.459: INFO: Pod "downwardapi-volume-2679a3cd-a2c5-4cab-bac5-4512ef70c0d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005354606s
STEP: Saw pod success
Sep 3 13:51:10.460: INFO: Pod "downwardapi-volume-2679a3cd-a2c5-4cab-bac5-4512ef70c0d6" satisfied condition "Succeeded or Failed"
Sep 3 13:51:10.462: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-7jvhm pod downwardapi-volume-2679a3cd-a2c5-4cab-bac5-4512ef70c0d6 container client-container:
STEP: delete the pod
Sep 3 13:51:10.475: INFO: Waiting for pod downwardapi-volume-2679a3cd-a2c5-4cab-bac5-4512ef70c0d6 to disappear
Sep 3 13:51:10.478: INFO: Pod downwardapi-volume-2679a3cd-a2c5-4cab-bac5-4512ef70c0d6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:10.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8681" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":205,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:50:57.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Sep 3 13:51:01.740: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 3 13:51:01.744: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 3 13:51:03.744: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 3 13:51:03.748: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 3 13:51:05.744: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 3 13:51:05.749: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 3 13:51:07.744: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 3 13:51:07.749: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 3 13:51:09.744: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 3 13:51:09.748: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 3 13:51:11.744: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 3 13:51:11.749: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 3 13:51:13.744: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 3 13:51:13.748: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:13.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2073" for this suite.
• [SLOW TEST:16.092 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
when create a pod with lifecycle hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop exec hook properly [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":64,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:13.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 3 13:51:13.813: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ce1a591-b973-4fd4-9446-9a432778b181" in namespace "downward-api-241" to be "Succeeded or Failed"
Sep 3 13:51:13.816: INFO: Pod "downwardapi-volume-8ce1a591-b973-4fd4-9446-9a432778b181": Phase="Pending", Reason="", readiness=false. Elapsed: 3.237171ms
Sep 3 13:51:15.820: INFO: Pod "downwardapi-volume-8ce1a591-b973-4fd4-9446-9a432778b181": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007537743s
STEP: Saw pod success
Sep 3 13:51:15.820: INFO: Pod "downwardapi-volume-8ce1a591-b973-4fd4-9446-9a432778b181" satisfied condition "Succeeded or Failed"
Sep 3 13:51:15.824: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod downwardapi-volume-8ce1a591-b973-4fd4-9446-9a432778b181 container client-container:
STEP: delete the pod
Sep 3 13:51:15.838: INFO: Waiting for pod downwardapi-volume-8ce1a591-b973-4fd4-9446-9a432778b181 to disappear
Sep 3 13:51:15.841: INFO: Pod downwardapi-volume-8ce1a591-b973-4fd4-9446-9a432778b181 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:15.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-241" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":69,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:10.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should check if kubectl can dry-run update Pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Sep 3 13:51:10.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4124 run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod'
Sep 3 13:51:10.679: INFO: stderr: ""
Sep 3 13:51:10.679: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: replace the image in the pod with server-side dry-run
Sep 3 13:51:10.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4124 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "docker.io/library/busybox:1.29"}]}} --dry-run=server'
Sep 3 13:51:11.025: INFO: stderr: ""
Sep 3 13:51:11.025: INFO: stdout: "pod/e2e-test-httpd-pod patched\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine
Sep 3 13:51:11.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4124 delete pods e2e-test-httpd-pod'
Sep 3 13:51:23.763: INFO: stderr: ""
Sep 3 13:51:23.763: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:23.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4124" for this suite.
• [SLOW TEST:13.253 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl server-side dry-run
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:902
should check if kubectl can dry-run update Pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":10,"skipped":221,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:15.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 3 13:51:16.456: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 3 13:51:19.528: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273876, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273876, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273876, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273876, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 3 13:51:22.542: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 3 13:51:22.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4821-crds.webhook.example.com via the AdmissionRegistration API
Sep 3 13:51:23.429: INFO: Waiting for webhook configuration to be ready...
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:24.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-95" for this suite.
STEP: Destroying namespace "webhook-95-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.461 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource with pruning [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":8,"skipped":90,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:24.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 3 13:51:24.522: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4484b7b7-4fbc-408c-badc-b003d957bd95" in namespace "downward-api-4070" to be "Succeeded or Failed"
Sep 3 13:51:24.525: INFO: Pod "downwardapi-volume-4484b7b7-4fbc-408c-badc-b003d957bd95": Phase="Pending", Reason="", readiness=false. Elapsed: 3.33874ms
Sep 3 13:51:26.529: INFO: Pod "downwardapi-volume-4484b7b7-4fbc-408c-badc-b003d957bd95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007366807s
STEP: Saw pod success
Sep 3 13:51:26.529: INFO: Pod "downwardapi-volume-4484b7b7-4fbc-408c-badc-b003d957bd95" satisfied condition "Succeeded or Failed"
Sep 3 13:51:26.532: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod downwardapi-volume-4484b7b7-4fbc-408c-badc-b003d957bd95 container client-container:
STEP: delete the pod
Sep 3 13:51:26.618: INFO: Waiting for pod downwardapi-volume-4484b7b7-4fbc-408c-badc-b003d957bd95 to disappear
Sep 3 13:51:26.717: INFO: Pod downwardapi-volume-4484b7b7-4fbc-408c-badc-b003d957bd95 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:26.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4070" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":165,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:49:44.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0903 13:50:25.135582 25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Sep 3 13:51:27.153: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
Sep 3 13:51:27.153: INFO: Deleting pod "simpletest.rc-5jzms" in namespace "gc-7284"
Sep 3 13:51:27.161: INFO: Deleting pod "simpletest.rc-cmsv4" in namespace "gc-7284"
Sep 3 13:51:27.169: INFO: Deleting pod "simpletest.rc-g8tp7" in namespace "gc-7284"
Sep 3 13:51:27.176: INFO: Deleting pod "simpletest.rc-gzk55" in namespace "gc-7284"
Sep 3 13:51:27.182: INFO: Deleting pod "simpletest.rc-j9lql" in namespace "gc-7284"
Sep 3 13:51:27.189: INFO: Deleting pod "simpletest.rc-jkc59" in namespace "gc-7284"
Sep 3 13:51:27.195: INFO: Deleting pod "simpletest.rc-nqn4l" in namespace "gc-7284"
Sep 3 13:51:27.208: INFO: Deleting pod "simpletest.rc-nvgt9" in namespace "gc-7284"
Sep 3 13:51:27.214: INFO: Deleting pod "simpletest.rc-qcn2b" in namespace "gc-7284"
Sep 3 13:51:27.225: INFO: Deleting pod "simpletest.rc-w2n2g" in namespace "gc-7284"
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:27.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7284" for this suite.
• [SLOW TEST:102.350 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan pods created by rc if delete options say so [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":5,"skipped":62,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:23.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-5dd9422c-7a61-4255-8bff-dbef54fa92ef
STEP: Creating a pod to test consume secrets
Sep 3 13:51:24.043: INFO: Waiting up to 5m0s for pod "pod-secrets-9024b5af-9b0c-492d-9da1-b1f91da1a97c" in namespace "secrets-4384" to be "Succeeded or Failed"
Sep 3 13:51:24.047: INFO: Pod "pod-secrets-9024b5af-9b0c-492d-9da1-b1f91da1a97c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.88807ms
Sep 3 13:51:26.050: INFO: Pod "pod-secrets-9024b5af-9b0c-492d-9da1-b1f91da1a97c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007501434s
Sep 3 13:51:28.054: INFO: Pod "pod-secrets-9024b5af-9b0c-492d-9da1-b1f91da1a97c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011302336s
STEP: Saw pod success
Sep 3 13:51:28.054: INFO: Pod "pod-secrets-9024b5af-9b0c-492d-9da1-b1f91da1a97c" satisfied condition "Succeeded or Failed"
Sep 3 13:51:28.057: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-secrets-9024b5af-9b0c-492d-9da1-b1f91da1a97c container secret-volume-test:
STEP: delete the pod
Sep 3 13:51:28.072: INFO: Waiting for pod pod-secrets-9024b5af-9b0c-492d-9da1-b1f91da1a97c to disappear
Sep 3 13:51:28.074: INFO: Pod pod-secrets-9024b5af-9b0c-492d-9da1-b1f91da1a97c no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:28.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4384" for this suite.
STEP: Destroying namespace "secret-namespace-698" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":229,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":259,"failed":0}
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:50:26.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-5411
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating stateful set ss in namespace statefulset-5411
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5411
Sep 3 13:50:26.305: INFO: Found 0 stateful pods, waiting for 1
Sep 3 13:50:36.310: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Sep 3 13:50:36.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5411 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Sep 3 13:50:36.628: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Sep 3 13:50:36.628: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Sep 3 13:50:36.628: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Sep 3 13:50:36.716: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Sep 3 13:50:46.917: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Sep 3 13:50:46.917: INFO: Waiting for statefulset status.replicas updated to 0
Sep 3 13:50:47.022: INFO: POD NODE PHASE GRACE CONDITIONS
Sep 3 13:50:47.022: INFO: ss-0 capi-kali-md-0-76b6798f7f-5n8xl Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:26 +0000 UTC }]
Sep 3 13:50:47.022: INFO: ss-1 Pending []
Sep 3 13:50:47.022: INFO:
Sep 3 13:50:47.022: INFO: StatefulSet ss has not reached scale 3, at 2
Sep 3 13:50:48.027: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.906472758s
Sep 3 13:50:49.032: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.901479338s
Sep 3 13:50:50.036: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.89719909s
Sep 3 13:50:51.042: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.892733474s
Sep 3 13:50:52.047: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.887206915s
Sep 3 13:50:53.052: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.882257201s
Sep 3 13:50:54.057: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.877055093s
Sep 3 13:50:55.061: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.872014078s
Sep 3 13:50:56.066: INFO: Verifying statefulset ss doesn't scale past 3 for another 867.931478ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5411
Sep 3 13:50:57.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5411 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 3 13:50:57.334: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Sep 3 13:50:57.334: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Sep 3 13:50:57.334: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Sep 3 13:50:57.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5411 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 3 13:50:57.599: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Sep 3 13:50:57.599: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Sep 3 13:50:57.599: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Sep 3 13:50:57.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5411 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 3 13:50:57.809: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Sep 3 13:50:57.809: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Sep 3 13:50:57.809: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Sep 3 13:50:57.814: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Sep 3 13:51:07.819: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Sep 3 13:51:07.819: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Sep 3 13:51:07.819: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Sep 3 13:51:07.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5411 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Sep 3 13:51:08.032: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Sep 3 13:51:08.032: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Sep 3 13:51:08.032: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Sep 3 13:51:08.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5411 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Sep 3 13:51:08.244: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Sep 3 13:51:08.244: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Sep 3 13:51:08.244: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Sep 3 13:51:08.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5411 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Sep 3 13:51:08.492: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Sep 3 13:51:08.492: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Sep 3 13:51:08.492: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Sep 3 13:51:08.492: INFO: Waiting for statefulset status.replicas updated to 0
Sep 3 13:51:08.494: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Sep 3 13:51:19.418: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Sep 3 13:51:19.418: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Sep 3 13:51:19.418: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Sep 3 13:51:19.621: INFO: POD NODE PHASE GRACE CONDITIONS
Sep 3 13:51:19.621: INFO: ss-0 capi-kali-md-0-76b6798f7f-5n8xl Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:51:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:51:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:26 +0000 UTC }]
Sep 3 13:51:19.621: INFO: ss-1 capi-kali-md-0-76b6798f7f-7jvhm Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:51:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:51:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:47 +0000 UTC }]
Sep 3 13:51:19.621: INFO: ss-2 capi-kali-md-0-76b6798f7f-5n8xl Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:51:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:51:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:47 +0000 UTC }]
Sep 3 13:51:19.621: INFO:
Sep 3 13:51:19.621: INFO: StatefulSet ss has not reached scale 0, at 3
Sep 3 13:51:20.626: INFO: POD NODE PHASE GRACE CONDITIONS
Sep 3 13:51:20.626: INFO: ss-0 capi-kali-md-0-76b6798f7f-5n8xl Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:51:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:51:08 +0000 UTC
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:26 +0000 UTC }] Sep 3 13:51:20.626: INFO: ss-1 capi-kali-md-0-76b6798f7f-7jvhm Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:51:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:51:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:47 +0000 UTC }] Sep 3 13:51:20.626: INFO: ss-2 capi-kali-md-0-76b6798f7f-5n8xl Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:51:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:51:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:47 +0000 UTC }] Sep 3 13:51:20.626: INFO: Sep 3 13:51:20.626: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 3 13:51:21.631: INFO: POD NODE PHASE GRACE CONDITIONS Sep 3 13:51:21.631: INFO: ss-0 capi-kali-md-0-76b6798f7f-5n8xl Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:51:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:51:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:26 +0000 UTC }] Sep 3 13:51:21.631: INFO: ss-1 capi-kali-md-0-76b6798f7f-7jvhm Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2021-09-03 13:50:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:51:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:51:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:47 +0000 UTC }] Sep 3 13:51:21.631: INFO: ss-2 capi-kali-md-0-76b6798f7f-5n8xl Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:51:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:51:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:47 +0000 UTC }] Sep 3 13:51:21.631: INFO: Sep 3 13:51:21.631: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 3 13:51:22.721: INFO: POD NODE PHASE GRACE CONDITIONS Sep 3 13:51:22.721: INFO: ss-0 capi-kali-md-0-76b6798f7f-5n8xl Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:51:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:51:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:26 +0000 UTC }] Sep 3 13:51:22.721: INFO: ss-2 capi-kali-md-0-76b6798f7f-5n8xl Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:51:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:51:08 +0000 UTC ContainersNotReady 
containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:47 +0000 UTC }] Sep 3 13:51:22.721: INFO: Sep 3 13:51:22.721: INFO: StatefulSet ss has not reached scale 0, at 2 Sep 3 13:51:23.732: INFO: POD NODE PHASE GRACE CONDITIONS Sep 3 13:51:23.732: INFO: ss-0 capi-kali-md-0-76b6798f7f-5n8xl Pending 0s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:51:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:51:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:26 +0000 UTC }] Sep 3 13:51:23.732: INFO: ss-2 capi-kali-md-0-76b6798f7f-5n8xl Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:51:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:51:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-03 13:50:47 +0000 UTC }] Sep 3 13:51:23.732: INFO: Sep 3 13:51:23.732: INFO: StatefulSet ss has not reached scale 0, at 2 Sep 3 13:51:24.819: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.884441704s Sep 3 13:51:25.824: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.797177571s Sep 3 13:51:26.914: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.792312695s Sep 3 13:51:27.921: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.701941278s Sep 3 13:51:28.924: INFO: Verifying statefulset ss doesn't scale past 0 for another 695.588909ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in 
namespace statefulset-5411 Sep 3 13:51:29.928: INFO: Scaling statefulset ss to 0 Sep 3 13:51:29.940: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 3 13:51:29.943: INFO: Deleting all statefulset in ns statefulset-5411 Sep 3 13:51:29.945: INFO: Scaling statefulset ss to 0 Sep 3 13:51:29.956: INFO: Waiting for statefulset status.replicas updated to 0 Sep 3 13:51:29.959: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:51:30.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5411" for this suite. • [SLOW TEST:63.956 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":13,"skipped":259,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:51:26.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Sep 3 13:51:27.049: INFO: Waiting up to 5m0s for pod "pod-e1675ca7-0388-4e75-ae50-4a391a1f59b7" in namespace "emptydir-6977" to be "Succeeded or Failed" Sep 3 13:51:27.052: INFO: Pod "pod-e1675ca7-0388-4e75-ae50-4a391a1f59b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.941254ms Sep 3 13:51:29.055: INFO: Pod "pod-e1675ca7-0388-4e75-ae50-4a391a1f59b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006649894s Sep 3 13:51:31.117: INFO: Pod "pod-e1675ca7-0388-4e75-ae50-4a391a1f59b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068860644s STEP: Saw pod success Sep 3 13:51:31.118: INFO: Pod "pod-e1675ca7-0388-4e75-ae50-4a391a1f59b7" satisfied condition "Succeeded or Failed" Sep 3 13:51:31.226: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-e1675ca7-0388-4e75-ae50-4a391a1f59b7 container test-container: STEP: delete the pod Sep 3 13:51:31.322: INFO: Waiting for pod pod-e1675ca7-0388-4e75-ae50-4a391a1f59b7 to disappear Sep 3 13:51:31.617: INFO: Pod pod-e1675ca7-0388-4e75-ae50-4a391a1f59b7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:51:31.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6977" for this suite. 
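[editor's note] The "Succeeded or Failed" waits above (and throughout this log) follow one polling pattern: check the pod phase every interval, log the elapsed time, and stop on a terminal phase or on timeout. A minimal sketch of that loop — not the framework's actual implementation; `get_phase` is a hypothetical stand-in for the real API call:

```python
import time

def wait_for_pod(get_phase, timeout_s=300, interval_s=2.0):
    # Poll until the pod reaches a terminal phase, mirroring the
    # 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"'
    # entries in the log above.
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        phase = get_phase()
        print(f'Pod: Phase="{phase}". Elapsed: {time.monotonic() - start:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval_s)
    raise TimeoutError("pod never reached Succeeded or Failed")

# Scripted phase sequence standing in for a live API:
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod(lambda: next(phases), interval_s=0.01)
```

The scripted sequence reproduces the Pending → Pending → Succeeded progression seen in the emptydir test above.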
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":166,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:51:27.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-93c0fff0-8f06-44e7-a4f0-b4994a89c86c STEP: Creating a pod to test consume secrets Sep 3 13:51:27.383: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bb1b92db-19b9-4a74-8646-74be87c90c82" in namespace "projected-9980" to be "Succeeded or Failed" Sep 3 13:51:27.386: INFO: Pod "pod-projected-secrets-bb1b92db-19b9-4a74-8646-74be87c90c82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.493341ms Sep 3 13:51:29.389: INFO: Pod "pod-projected-secrets-bb1b92db-19b9-4a74-8646-74be87c90c82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005341982s Sep 3 13:51:31.619: INFO: Pod "pod-projected-secrets-bb1b92db-19b9-4a74-8646-74be87c90c82": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.23525274s STEP: Saw pod success Sep 3 13:51:31.619: INFO: Pod "pod-projected-secrets-bb1b92db-19b9-4a74-8646-74be87c90c82" satisfied condition "Succeeded or Failed" Sep 3 13:51:31.622: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-projected-secrets-bb1b92db-19b9-4a74-8646-74be87c90c82 container projected-secret-volume-test: STEP: delete the pod Sep 3 13:51:32.023: INFO: Waiting for pod pod-projected-secrets-bb1b92db-19b9-4a74-8646-74be87c90c82 to disappear Sep 3 13:51:32.027: INFO: Pod pod-projected-secrets-bb1b92db-19b9-4a74-8646-74be87c90c82 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:51:32.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9980" for this suite. • ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:51:09.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-vrb7 STEP: Creating a pod to test atomic-volume-subpath Sep 3 13:51:09.383: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vrb7" in namespace "subpath-4481" to be "Succeeded or Failed" Sep 3 13:51:09.385: INFO: Pod "pod-subpath-test-configmap-vrb7": Phase="Pending", Reason="", 
readiness=false. Elapsed: 2.581758ms Sep 3 13:51:11.389: INFO: Pod "pod-subpath-test-configmap-vrb7": Phase="Running", Reason="", readiness=true. Elapsed: 2.006422889s Sep 3 13:51:13.393: INFO: Pod "pod-subpath-test-configmap-vrb7": Phase="Running", Reason="", readiness=true. Elapsed: 4.010021007s Sep 3 13:51:15.397: INFO: Pod "pod-subpath-test-configmap-vrb7": Phase="Running", Reason="", readiness=true. Elapsed: 6.014347271s Sep 3 13:51:17.402: INFO: Pod "pod-subpath-test-configmap-vrb7": Phase="Running", Reason="", readiness=true. Elapsed: 8.019007398s Sep 3 13:51:19.416: INFO: Pod "pod-subpath-test-configmap-vrb7": Phase="Running", Reason="", readiness=true. Elapsed: 10.03341581s Sep 3 13:51:21.517: INFO: Pod "pod-subpath-test-configmap-vrb7": Phase="Running", Reason="", readiness=true. Elapsed: 12.134278442s Sep 3 13:51:23.528: INFO: Pod "pod-subpath-test-configmap-vrb7": Phase="Running", Reason="", readiness=true. Elapsed: 14.145543353s Sep 3 13:51:25.532: INFO: Pod "pod-subpath-test-configmap-vrb7": Phase="Running", Reason="", readiness=true. Elapsed: 16.149781681s Sep 3 13:51:27.538: INFO: Pod "pod-subpath-test-configmap-vrb7": Phase="Running", Reason="", readiness=true. Elapsed: 18.15561199s Sep 3 13:51:29.618: INFO: Pod "pod-subpath-test-configmap-vrb7": Phase="Running", Reason="", readiness=true. Elapsed: 20.235045914s Sep 3 13:51:31.622: INFO: Pod "pod-subpath-test-configmap-vrb7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.239067258s STEP: Saw pod success Sep 3 13:51:31.622: INFO: Pod "pod-subpath-test-configmap-vrb7" satisfied condition "Succeeded or Failed" Sep 3 13:51:31.625: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-subpath-test-configmap-vrb7 container test-container-subpath-configmap-vrb7: STEP: delete the pod Sep 3 13:51:32.019: INFO: Waiting for pod pod-subpath-test-configmap-vrb7 to disappear Sep 3 13:51:32.022: INFO: Pod pod-subpath-test-configmap-vrb7 no longer exists STEP: Deleting pod pod-subpath-test-configmap-vrb7 Sep 3 13:51:32.022: INFO: Deleting pod "pod-subpath-test-configmap-vrb7" in namespace "subpath-4481" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:51:32.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4481" for this suite. • [SLOW TEST:22.700 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":67,"failed":0} S ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":257,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:51:31.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info Sep 3 13:51:32.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-773 cluster-info' Sep 3 13:51:32.161: INFO: stderr: "" Sep 3 13:51:32.161: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.18.0.6:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:51:32.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-773" for this suite. 
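[editor's note] The cluster-info stdout above is wrapped in ANSI color escapes (`\x1b[0;32m` etc.), which makes grepping such logs awkward. A small helper to strip them, applied to the exact string captured above:

```python
import re

# SGR color escapes as they appear in the kubectl cluster-info stdout above.
ANSI_SGR = re.compile(r"\x1b\[[0-9;]*m")

stdout = ("\x1b[0;32mKubernetes master\x1b[0m is running at "
          "\x1b[0;33mhttps://172.18.0.6:6443\x1b[0m\n")
clean = ANSI_SGR.sub("", stdout)
print(clean.strip())  # -> Kubernetes master is running at https://172.18.0.6:6443
```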
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":-1,"completed":11,"skipped":194,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:51:32.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions Sep 3 13:51:32.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2049 api-versions' Sep 3 13:51:32.427: INFO: stderr: "" Sep 3 13:51:32.427: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nlitmuschaos.io/v1alpha1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npingcap.com/v1alpha1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:51:32.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2049" for this suite. 
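[editor's note] The api-versions check above reduces to splitting kubectl's stdout on newlines and asserting that `v1` is present. A sketch against an abridged copy of the output captured in this log:

```python
# Abridged from the kubectl api-versions stdout logged above.
stdout = (
    "admissionregistration.k8s.io/v1\n"
    "apps/v1\n"
    "batch/v1\n"
    "storage.k8s.io/v1\n"
    "v1\n"
)
versions = stdout.strip().split("\n")
assert "v1" in versions  # the condition the conformance test validates
print(len(versions))  # -> 5
```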
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":12,"skipped":200,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:51:28.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-7e5dd6ba-4da4-4604-97d5-efdd9d77ba2e STEP: Creating secret with name secret-projected-all-test-volume-eaf37a09-402d-46f4-8816-2fc483fce7d6 STEP: Creating a pod to test Check all projections for projected volume plugin Sep 3 13:51:28.186: INFO: Waiting up to 5m0s for pod "projected-volume-aa2675d2-8585-4f70-9e19-6ea8856a0d0c" in namespace "projected-8277" to be "Succeeded or Failed" Sep 3 13:51:28.189: INFO: Pod "projected-volume-aa2675d2-8585-4f70-9e19-6ea8856a0d0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.700888ms Sep 3 13:51:30.219: INFO: Pod "projected-volume-aa2675d2-8585-4f70-9e19-6ea8856a0d0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032541463s Sep 3 13:51:32.222: INFO: Pod "projected-volume-aa2675d2-8585-4f70-9e19-6ea8856a0d0c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.035634459s STEP: Saw pod success Sep 3 13:51:32.222: INFO: Pod "projected-volume-aa2675d2-8585-4f70-9e19-6ea8856a0d0c" satisfied condition "Succeeded or Failed" Sep 3 13:51:32.319: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod projected-volume-aa2675d2-8585-4f70-9e19-6ea8856a0d0c container projected-all-volume-test: STEP: delete the pod Sep 3 13:51:32.429: INFO: Waiting for pod projected-volume-aa2675d2-8585-4f70-9e19-6ea8856a0d0c to disappear Sep 3 13:51:32.625: INFO: Pod projected-volume-aa2675d2-8585-4f70-9e19-6ea8856a0d0c no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:51:32.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8277" for this suite. •S ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":257,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:51:30.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Sep 3 13:51:30.819: INFO: Waiting up to 5m0s for pod 
"pod-69afa9e7-8b93-4fe7-afc2-636b324cc138" in namespace "emptydir-9968" to be "Succeeded or Failed" Sep 3 13:51:31.117: INFO: Pod "pod-69afa9e7-8b93-4fe7-afc2-636b324cc138": Phase="Pending", Reason="", readiness=false. Elapsed: 298.231698ms Sep 3 13:51:33.122: INFO: Pod "pod-69afa9e7-8b93-4fe7-afc2-636b324cc138": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302429944s Sep 3 13:51:35.126: INFO: Pod "pod-69afa9e7-8b93-4fe7-afc2-636b324cc138": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.306482884s STEP: Saw pod success Sep 3 13:51:35.126: INFO: Pod "pod-69afa9e7-8b93-4fe7-afc2-636b324cc138" satisfied condition "Succeeded or Failed" Sep 3 13:51:35.129: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-69afa9e7-8b93-4fe7-afc2-636b324cc138 container test-container: STEP: delete the pod Sep 3 13:51:35.144: INFO: Waiting for pod pod-69afa9e7-8b93-4fe7-afc2-636b324cc138 to disappear Sep 3 13:51:35.147: INFO: Pod pod-69afa9e7-8b93-4fe7-afc2-636b324cc138 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:51:35.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9968" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":260,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:51:32.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 3 13:51:32.170: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-d7ae5a2c-47f8-42ff-aadb-70348e223529" in namespace "security-context-test-2306" to be "Succeeded or Failed" Sep 3 13:51:32.172: INFO: Pod "alpine-nnp-false-d7ae5a2c-47f8-42ff-aadb-70348e223529": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210777ms Sep 3 13:51:34.176: INFO: Pod "alpine-nnp-false-d7ae5a2c-47f8-42ff-aadb-70348e223529": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005787743s Sep 3 13:51:36.179: INFO: Pod "alpine-nnp-false-d7ae5a2c-47f8-42ff-aadb-70348e223529": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009271409s Sep 3 13:51:36.179: INFO: Pod "alpine-nnp-false-d7ae5a2c-47f8-42ff-aadb-70348e223529" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:51:36.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2306" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":122,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:51:36.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:51:36.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3691" for this suite. 
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":8,"skipped":136,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:32.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 3 13:51:32.703: INFO: Waiting up to 5m0s for pod "busybox-user-65534-a6ce1d09-8a33-4e85-aaf8-7564ec959aec" in namespace "security-context-test-9233" to be "Succeeded or Failed"
Sep 3 13:51:32.706: INFO: Pod "busybox-user-65534-a6ce1d09-8a33-4e85-aaf8-7564ec959aec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.667083ms
Sep 3 13:51:34.710: INFO: Pod "busybox-user-65534-a6ce1d09-8a33-4e85-aaf8-7564ec959aec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006806626s
Sep 3 13:51:36.713: INFO: Pod "busybox-user-65534-a6ce1d09-8a33-4e85-aaf8-7564ec959aec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010134779s
Sep 3 13:51:36.713: INFO: Pod "busybox-user-65534-a6ce1d09-8a33-4e85-aaf8-7564ec959aec" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:36.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9233" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":276,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:36.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename discovery
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39
STEP: Setting up server cert
[It] should validate PreferredVersion for each APIGroup [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 3 13:51:37.114: INFO: Checking APIGroup: apiregistration.k8s.io
Sep 3 13:51:37.115: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1
Sep 3 13:51:37.115: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}]
Sep 3 13:51:37.115: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1
Sep 3 13:51:37.115: INFO: Checking APIGroup: extensions
Sep 3 13:51:37.116: INFO: PreferredVersion.GroupVersion: extensions/v1beta1
Sep 3 13:51:37.116: INFO: Versions found [{extensions/v1beta1 v1beta1}]
Sep 3 13:51:37.116: INFO: extensions/v1beta1 matches extensions/v1beta1
Sep 3 13:51:37.116: INFO: Checking APIGroup: apps
Sep 3 13:51:37.117: INFO: PreferredVersion.GroupVersion: apps/v1
Sep 3 13:51:37.117: INFO: Versions found [{apps/v1 v1}]
Sep 3 13:51:37.117: INFO: apps/v1 matches apps/v1
Sep 3 13:51:37.117: INFO: Checking APIGroup: events.k8s.io
Sep 3 13:51:37.118: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1
Sep 3 13:51:37.118: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}]
Sep 3 13:51:37.118: INFO: events.k8s.io/v1 matches events.k8s.io/v1
Sep 3 13:51:37.118: INFO: Checking APIGroup: authentication.k8s.io
Sep 3 13:51:37.119: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1
Sep 3 13:51:37.119: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}]
Sep 3 13:51:37.119: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1
Sep 3 13:51:37.119: INFO: Checking APIGroup: authorization.k8s.io
Sep 3 13:51:37.121: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1
Sep 3 13:51:37.121: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}]
Sep 3 13:51:37.121: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1
Sep 3 13:51:37.121: INFO: Checking APIGroup: autoscaling
Sep 3 13:51:37.122: INFO: PreferredVersion.GroupVersion: autoscaling/v1
Sep 3 13:51:37.122: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}]
Sep 3 13:51:37.122: INFO: autoscaling/v1 matches autoscaling/v1
Sep 3 13:51:37.122: INFO: Checking APIGroup: batch
Sep 3 13:51:37.123: INFO: PreferredVersion.GroupVersion: batch/v1
Sep 3 13:51:37.123: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}]
Sep 3 13:51:37.123: INFO: batch/v1 matches batch/v1
Sep 3 13:51:37.123: INFO: Checking APIGroup: certificates.k8s.io
Sep 3 13:51:37.124: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1
Sep 3 13:51:37.124: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}]
Sep 3 13:51:37.124: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1
Sep 3 13:51:37.124: INFO: Checking APIGroup: networking.k8s.io
Sep 3 13:51:37.125: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1
Sep 3 13:51:37.125: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}]
Sep 3 13:51:37.125: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1
Sep 3 13:51:37.125: INFO: Checking APIGroup: policy
Sep 3 13:51:37.127: INFO: PreferredVersion.GroupVersion: policy/v1beta1
Sep 3 13:51:37.127: INFO: Versions found [{policy/v1beta1 v1beta1}]
Sep 3 13:51:37.127: INFO: policy/v1beta1 matches policy/v1beta1
Sep 3 13:51:37.127: INFO: Checking APIGroup: rbac.authorization.k8s.io
Sep 3 13:51:37.128: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1
Sep 3 13:51:37.128: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}]
Sep 3 13:51:37.128: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1
Sep 3 13:51:37.128: INFO: Checking APIGroup: storage.k8s.io
Sep 3 13:51:37.129: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1
Sep 3 13:51:37.129: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}]
Sep 3 13:51:37.129: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1
Sep 3 13:51:37.129: INFO: Checking APIGroup: admissionregistration.k8s.io
Sep 3 13:51:37.130: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1
Sep 3 13:51:37.130: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}]
Sep 3 13:51:37.130: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1
Sep 3 13:51:37.130: INFO: Checking APIGroup: apiextensions.k8s.io
Sep 3 13:51:37.131: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1
Sep 3 13:51:37.131: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}]
Sep 3 13:51:37.131: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1
Sep 3 13:51:37.131: INFO: Checking APIGroup: scheduling.k8s.io
Sep 3 13:51:37.133: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1
Sep 3 13:51:37.133: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}]
Sep 3 13:51:37.133: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1
Sep 3 13:51:37.133: INFO: Checking APIGroup: coordination.k8s.io
Sep 3 13:51:37.133: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1
Sep 3 13:51:37.133: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}]
Sep 3 13:51:37.133: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1
Sep 3 13:51:37.133: INFO: Checking APIGroup: node.k8s.io
Sep 3 13:51:37.135: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1beta1
Sep 3 13:51:37.135: INFO: Versions found [{node.k8s.io/v1beta1 v1beta1}]
Sep 3 13:51:37.135: INFO: node.k8s.io/v1beta1 matches node.k8s.io/v1beta1
Sep 3 13:51:37.135: INFO: Checking APIGroup: discovery.k8s.io
Sep 3 13:51:37.136: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1
Sep 3 13:51:37.136: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}]
Sep 3 13:51:37.136: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1
Sep 3 13:51:37.136: INFO: Checking APIGroup: litmuschaos.io
Sep 3 13:51:37.137: INFO: PreferredVersion.GroupVersion: litmuschaos.io/v1alpha1
Sep 3 13:51:37.137: INFO: Versions found [{litmuschaos.io/v1alpha1 v1alpha1}]
Sep 3 13:51:37.137: INFO: litmuschaos.io/v1alpha1 matches litmuschaos.io/v1alpha1
Sep 3 13:51:37.137: INFO: Checking APIGroup: pingcap.com
Sep 3 13:51:37.138: INFO: PreferredVersion.GroupVersion: pingcap.com/v1alpha1
Sep 3 13:51:37.138: INFO: Versions found [{pingcap.com/v1alpha1 v1alpha1}]
Sep 3 13:51:37.138: INFO: pingcap.com/v1alpha1 matches pingcap.com/v1alpha1
[AfterEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:37.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-2674" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":14,"skipped":277,"failed":0}
SSSS
------------------------------
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:32.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 3 13:51:32.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:38.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8657" for this suite.
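The Discovery check logged above ("X matches Y" for each group) amounts to verifying that every API group's `preferredVersion.groupVersion` appears among the group's served `versions`. A minimal, cluster-free sketch of that logic, with the `/apis` discovery document abridged and hard-coded rather than fetched from a live apiserver:

```python
# Mirror the e2e Discovery check: each group's preferred groupVersion
# must be one of the groupVersions the apiserver actually serves.
sample_groups = [  # abridged shape of the /apis discovery document
    {"name": "apps",
     "preferredVersion": {"groupVersion": "apps/v1", "version": "v1"},
     "versions": [{"groupVersion": "apps/v1", "version": "v1"}]},
    {"name": "batch",
     "preferredVersion": {"groupVersion": "batch/v1", "version": "v1"},
     "versions": [{"groupVersion": "batch/v1", "version": "v1"},
                  {"groupVersion": "batch/v1beta1", "version": "v1beta1"}]},
]

def preferred_version_served(group: dict) -> bool:
    """True when the group's preferred groupVersion is in its versions list."""
    preferred = group["preferredVersion"]["groupVersion"]
    served = {v["groupVersion"] for v in group["versions"]}
    return preferred in served

for group in sample_groups:
    assert preferred_version_served(group), group["name"]
```

Against a real cluster the same document is available at `GET /apis`; the test iterates every group returned there, which is why operator-installed groups such as litmuschaos.io and pingcap.com show up in the log alongside the built-ins.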
• [SLOW TEST:6.100 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":284,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:37.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-96fcf690-554b-4055-b9e9-a01991b7b7c6
STEP: Creating a pod to test consume configMaps
Sep 3 13:51:37.189: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-aa2fc790-4410-40bc-8dad-5bb386542d71" in namespace "projected-3935" to be "Succeeded or Failed"
Sep 3 13:51:37.190: INFO: Pod "pod-projected-configmaps-aa2fc790-4410-40bc-8dad-5bb386542d71": Phase="Pending", Reason="", readiness=false. Elapsed: 1.824222ms
Sep 3 13:51:39.195: INFO: Pod "pod-projected-configmaps-aa2fc790-4410-40bc-8dad-5bb386542d71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0060169s
Sep 3 13:51:41.198: INFO: Pod "pod-projected-configmaps-aa2fc790-4410-40bc-8dad-5bb386542d71": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00975267s
Sep 3 13:51:43.202: INFO: Pod "pod-projected-configmaps-aa2fc790-4410-40bc-8dad-5bb386542d71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013540522s
STEP: Saw pod success
Sep 3 13:51:43.202: INFO: Pod "pod-projected-configmaps-aa2fc790-4410-40bc-8dad-5bb386542d71" satisfied condition "Succeeded or Failed"
Sep 3 13:51:43.205: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-projected-configmaps-aa2fc790-4410-40bc-8dad-5bb386542d71 container projected-configmap-volume-test:
STEP: delete the pod
Sep 3 13:51:43.219: INFO: Waiting for pod pod-projected-configmaps-aa2fc790-4410-40bc-8dad-5bb386542d71 to disappear
Sep 3 13:51:43.221: INFO: Pod pod-projected-configmaps-aa2fc790-4410-40bc-8dad-5bb386542d71 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:43.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3935" for this suite.
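The "consumable in multiple volumes in the same pod" case above mounts one configMap through two separate projected volumes and has the container read both copies. A hedged sketch of that shape (names and image illustrative; the test generates its own):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical; tests use generated names
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.33                    # illustrative image
    command: ["sh", "-c", "cat /etc/projected-1/data /etc/projected-2/data"]
    volumeMounts:
    - {name: cm-vol-1, mountPath: /etc/projected-1}
    - {name: cm-vol-2, mountPath: /etc/projected-2}
  volumes:                                 # same configMap, two projected volumes
  - name: cm-vol-1
    projected:
      sources:
      - configMap: {name: projected-configmap-test-volume}
  - name: cm-vol-2
    projected:
      sources:
      - configMap: {name: projected-configmap-test-volume}
```

The pod succeeding, as in the log, demonstrates the kubelet can materialize the same configMap into multiple volumes of one pod.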
• [SLOW TEST:6.163 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":281,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:35.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Sep 3 13:51:41.797: INFO: Successfully updated pod "labelsupdateed7810f8-86d5-4435-87af-abe1500e09a1"
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:43.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7285" for this suite.
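The Downward API "should update labels on modification" case relies on a downwardAPI volume exposing `metadata.labels` as a file, which the kubelet rewrites when the pod's labels are patched. A sketch of such a pod (name, image, and label values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-example   # hypothetical; the test generates its name
  labels:
    key: value1                # patched later; the mounted file must follow
spec:
  containers:
  - name: client-container
    image: busybox:1.33        # illustrative image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 2; done"]
    volumeMounts:
    - {name: podinfo, mountPath: /etc/podinfo}
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef: {fieldPath: metadata.labels}
```

After `kubectl label pod labelsupdate-example key=value2 --overwrite` (or an equivalent patch, as the test performs), the content of `/etc/podinfo/labels` is expected to converge to the new value.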
• [SLOW TEST:8.589 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":299,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:36.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 3 13:51:36.878: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 3 13:51:38.886: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273896, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273896, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273896, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273896, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 3 13:51:40.890: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273896, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273896, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273896, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273896, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 3 13:51:43.899: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:43.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6137" for this suite.
STEP: Destroying namespace "webhook-6137-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.627 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":9,"skipped":149,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:32.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:44.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-76" for this suite.
• [SLOW TEST:11.322 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":13,"skipped":313,"failed":0}
SSSS
------------------------------
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:43.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Sep 3 13:51:43.353: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:46.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3488" for this suite.
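The InitContainer case above ("fail the pod if init containers fail on a RestartNever pod") creates a pod of roughly this shape; with `restartPolicy: Never`, a failing init container is not retried, the pod goes `Failed`, and the app container is never started. Names and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail-example   # hypothetical; the test generates its name
spec:
  restartPolicy: Never          # init failure => pod Failed, no retry
  initContainers:
  - name: init1
    image: busybox:1.33         # illustrative image
    command: ["sh", "-c", "exit 1"]   # deliberately failing init container
  containers:
  - name: run1
    image: busybox:1.33
    command: ["sh", "-c", "echo should never run"]
```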
•
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":16,"skipped":283,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:43.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:46.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9021" for this suite.
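The ReplicationController adoption case above follows a given/when/then shape: an orphan pod carrying a `name: pod-adoption` label exists first, then an RC whose selector matches that label is created and takes ownership of the pod instead of spawning a new replica. A hedged sketch (image illustrative):

```yaml
# Orphan pod created first, with the label the RC will select on.
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels: {name: pod-adoption}
spec:
  containers:
  - {name: pod-adoption, image: busybox:1.33, command: ["sleep", "3600"]}
---
# RC created second; its selector matches, so it adopts the orphan
# (setting itself as ownerReference) rather than creating a new pod.
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector: {name: pod-adoption}
  template:
    metadata:
      labels: {name: pod-adoption}
    spec:
      containers:
      - {name: pod-adoption, image: busybox:1.33, command: ["sleep", "3600"]}
```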
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":16,"skipped":363,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:47.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name secret-emptykey-test-d1532ecc-fdb4-4ae8-b0e5-743f810c0982
[AfterEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:47.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1306" for this suite.
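The Secrets case above is a negative test: the apiserver must reject a Secret whose `data` map contains an empty key at validation time. The kind of manifest it submits, and expects to fail, looks roughly like this (name and value illustrative):

```yaml
# Invalid on purpose: a Secret data key must be a non-empty, valid key name.
# The apiserver rejects this create request, which is what the test asserts.
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-test-example   # hypothetical; the test generates its name
data:
  "": dmFsdWUtMQ==                     # empty key => validation error ("value-1" base64-encoded)
```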
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":17,"skipped":385,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:38.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
Sep 3 13:51:38.855: INFO: role binding webhook-auth-reader already exists
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 3 13:51:38.869: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 3 13:51:40.880: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273898, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273898, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273898, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273898, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 3 13:51:42.884: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273898, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273898, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273898, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273898, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 3 13:51:45.898: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 3 13:51:45.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7309-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:47.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2044" for this suite.
STEP: Destroying namespace "webhook-2044-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.867 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":10,"skipped":342,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:43.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 3 13:51:44.809: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 3 13:51:47.830: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:47.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5793" for this suite.
STEP: Destroying namespace "webhook-5793-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":10,"skipped":165,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:44.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2262.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2262.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2262.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2262.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2262.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2262.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 3 13:51:48.228: INFO: DNS probes using dns-2262/dns-test-466ebce8-9a63-407d-b1cf-2203d94250e7 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:48.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2262" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":14,"skipped":317,"failed":0}
SSSS
------------------------------
[BeforeEach] [k8s.io] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:47.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 3 13:51:47.213: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-6b4af20b-d7e0-42f2-8bae-8d986c9a37a7" in namespace "security-context-test-8721" to be "Succeeded or Failed"
Sep 3 13:51:47.216: INFO: Pod "busybox-privileged-false-6b4af20b-d7e0-42f2-8bae-8d986c9a37a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.617835ms
Sep 3 13:51:49.219: INFO: Pod "busybox-privileged-false-6b4af20b-d7e0-42f2-8bae-8d986c9a37a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005385345s
Sep 3 13:51:49.219: INFO: Pod "busybox-privileged-false-6b4af20b-d7e0-42f2-8bae-8d986c9a37a7" satisfied condition "Succeeded or Failed"
Sep 3 13:51:49.225: INFO: Got logs for pod "busybox-privileged-false-6b4af20b-d7e0-42f2-8bae-8d986c9a37a7": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:49.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8721" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":350,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Ingress API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:49.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingress
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating Ingress API operations [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.iov1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Sep 3 13:51:49.322: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Sep 3 13:51:49.325: INFO: starting watch
STEP: patching
STEP: updating
Sep 3 13:51:49.338: INFO: waiting for watch events with expected annotations
Sep 3 13:51:49.338: INFO: saw patched and updated annotations
STEP: patching /status
STEP: updating /status
STEP: get /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Ingress API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:49.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-8997" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":12,"skipped":373,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":10,"skipped":192,"failed":0}
[BeforeEach] [sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:50:49.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Sep 3 13:50:49.665: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9974 /api/v1/namespaces/watch-9974/configmaps/e2e-watch-test-configmap-a 73dde876-92b2-4739-a27e-292bd39c8dc6 1047003 0 2021-09-03 13:50:49 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-09-03 13:50:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 3 13:50:49.665: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9974 /api/v1/namespaces/watch-9974/configmaps/e2e-watch-test-configmap-a 73dde876-92b2-4739-a27e-292bd39c8dc6 1047003 0 2021-09-03 13:50:49 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-09-03 13:50:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Sep 3 13:50:59.675: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9974 /api/v1/namespaces/watch-9974/configmaps/e2e-watch-test-configmap-a 73dde876-92b2-4739-a27e-292bd39c8dc6 1047105 0 2021-09-03 13:50:49 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-09-03 13:50:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 3 13:50:59.676: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9974 /api/v1/namespaces/watch-9974/configmaps/e2e-watch-test-configmap-a 73dde876-92b2-4739-a27e-292bd39c8dc6 1047105 0 2021-09-03 13:50:49 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-09-03 13:50:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Sep 3 13:51:09.685: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9974 /api/v1/namespaces/watch-9974/configmaps/e2e-watch-test-configmap-a 73dde876-92b2-4739-a27e-292bd39c8dc6 1047342 0 2021-09-03 13:50:49 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-09-03 13:50:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 3 13:51:09.685: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9974 /api/v1/namespaces/watch-9974/configmaps/e2e-watch-test-configmap-a 73dde876-92b2-4739-a27e-292bd39c8dc6 1047342 0 2021-09-03 13:50:49 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-09-03 13:50:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Sep 3 13:51:19.819: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9974 /api/v1/namespaces/watch-9974/configmaps/e2e-watch-test-configmap-a 73dde876-92b2-4739-a27e-292bd39c8dc6 1047572 0 2021-09-03 13:50:49 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-09-03 13:50:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 3 13:51:19.819: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9974 /api/v1/namespaces/watch-9974/configmaps/e2e-watch-test-configmap-a 73dde876-92b2-4739-a27e-292bd39c8dc6 1047572 0 2021-09-03 13:50:49 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-09-03 13:50:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Sep 3 13:51:29.827: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9974 /api/v1/namespaces/watch-9974/configmaps/e2e-watch-test-configmap-b 8f18b8d8-c59e-4d53-8357-4deb55baa847 1047836 0 2021-09-03 13:51:29 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-09-03 13:51:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 3 13:51:29.827: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9974 /api/v1/namespaces/watch-9974/configmaps/e2e-watch-test-configmap-b 8f18b8d8-c59e-4d53-8357-4deb55baa847 1047836 0 2021-09-03 13:51:29 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-09-03 13:51:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Sep 3 13:51:39.833: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9974 /api/v1/namespaces/watch-9974/configmaps/e2e-watch-test-configmap-b 8f18b8d8-c59e-4d53-8357-4deb55baa847 1048230 0 2021-09-03 13:51:29 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-09-03 13:51:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 3 13:51:39.834: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9974 /api/v1/namespaces/watch-9974/configmaps/e2e-watch-test-configmap-b 8f18b8d8-c59e-4d53-8357-4deb55baa847 1048230 0 2021-09-03 13:51:29 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-09-03 13:51:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:49.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9974" for this suite.
• [SLOW TEST:60.215 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe add, update, and delete watch notifications on configmaps [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:47.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Sep 3 13:51:47.149: INFO: Waiting up to 5m0s for pod "pod-4e9ca051-1c69-4694-8a96-aea4a544bc3b" in namespace "emptydir-7678" to be "Succeeded or Failed"
Sep 3 13:51:47.152: INFO: Pod "pod-4e9ca051-1c69-4694-8a96-aea4a544bc3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.312208ms
Sep 3 13:51:49.154: INFO: Pod "pod-4e9ca051-1c69-4694-8a96-aea4a544bc3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00511568s
Sep 3 13:51:51.158: INFO: Pod "pod-4e9ca051-1c69-4694-8a96-aea4a544bc3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008897306s
STEP: Saw pod success
Sep 3 13:51:51.158: INFO: Pod "pod-4e9ca051-1c69-4694-8a96-aea4a544bc3b" satisfied condition "Succeeded or Failed"
Sep 3 13:51:51.162: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-7jvhm pod pod-4e9ca051-1c69-4694-8a96-aea4a544bc3b container test-container:
STEP: delete the pod
Sep 3 13:51:51.177: INFO: Waiting for pod pod-4e9ca051-1c69-4694-8a96-aea4a544bc3b to disappear
Sep 3 13:51:51.180: INFO: Pod pod-4e9ca051-1c69-4694-8a96-aea4a544bc3b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:51.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7678" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":405,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:47.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Sep 3 13:51:47.970: INFO: Waiting up to 5m0s for pod "pod-8e4d4b6b-2c74-426e-ad1b-1e5c991c430e" in namespace "emptydir-3747" to be "Succeeded or Failed"
Sep 3 13:51:47.974: INFO: Pod "pod-8e4d4b6b-2c74-426e-ad1b-1e5c991c430e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.278205ms
Sep 3 13:51:49.977: INFO: Pod "pod-8e4d4b6b-2c74-426e-ad1b-1e5c991c430e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006930271s
Sep 3 13:51:51.980: INFO: Pod "pod-8e4d4b6b-2c74-426e-ad1b-1e5c991c430e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010137655s
STEP: Saw pod success
Sep 3 13:51:51.981: INFO: Pod "pod-8e4d4b6b-2c74-426e-ad1b-1e5c991c430e" satisfied condition "Succeeded or Failed"
Sep 3 13:51:51.983: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-7jvhm pod pod-8e4d4b6b-2c74-426e-ad1b-1e5c991c430e container test-container:
STEP: delete the pod
Sep 3 13:51:51.998: INFO: Waiting for pod pod-8e4d4b6b-2c74-426e-ad1b-1e5c991c430e to disappear
Sep 3 13:51:52.001: INFO: Pod pod-8e4d4b6b-2c74-426e-ad1b-1e5c991c430e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:52.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3747" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":171,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:48.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:52.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-899" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":321,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:49.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 3 13:51:49.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:53.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4245" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":389,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:53.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 3 13:51:53.589: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ebf89606-b8ca-49b9-881e-d8fce45bdb0e" in namespace "projected-9574" to be "Succeeded or Failed"
Sep 3 13:51:53.591: INFO: Pod "downwardapi-volume-ebf89606-b8ca-49b9-881e-d8fce45bdb0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.552816ms
Sep 3 13:51:55.595: INFO: Pod "downwardapi-volume-ebf89606-b8ca-49b9-881e-d8fce45bdb0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006196961s
STEP: Saw pod success
Sep 3 13:51:55.595: INFO: Pod "downwardapi-volume-ebf89606-b8ca-49b9-881e-d8fce45bdb0e" satisfied condition "Succeeded or Failed"
Sep 3 13:51:55.598: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-7jvhm pod downwardapi-volume-ebf89606-b8ca-49b9-881e-d8fce45bdb0e container client-container:
STEP: delete the pod
Sep 3 13:51:55.614: INFO: Waiting for pod downwardapi-volume-ebf89606-b8ca-49b9-881e-d8fce45bdb0e to disappear
Sep 3 13:51:55.617: INFO: Pod downwardapi-volume-ebf89606-b8ca-49b9-881e-d8fce45bdb0e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:55.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9574" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":391,"failed":0}
SSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":11,"skipped":192,"failed":0}
[BeforeEach] [k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:49.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should contain environment variables for services [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 3 13:51:53.910: INFO: Waiting up to 5m0s for pod "client-envvars-ec0a1e97-bf37-4a48-bc93-cdf4042daab0" in namespace "pods-2048" to be "Succeeded or Failed"
Sep 3 13:51:53.914: INFO: Pod "client-envvars-ec0a1e97-bf37-4a48-bc93-cdf4042daab0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.159494ms
Sep 3 13:51:55.918: INFO: Pod "client-envvars-ec0a1e97-bf37-4a48-bc93-cdf4042daab0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007399412s
STEP: Saw pod success
Sep 3 13:51:55.918: INFO: Pod "client-envvars-ec0a1e97-bf37-4a48-bc93-cdf4042daab0" satisfied condition "Succeeded or Failed"
Sep 3 13:51:55.921: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-7jvhm pod client-envvars-ec0a1e97-bf37-4a48-bc93-cdf4042daab0 container env3cont:
STEP: delete the pod
Sep 3 13:51:55.941: INFO: Waiting for pod client-envvars-ec0a1e97-bf37-4a48-bc93-cdf4042daab0 to disappear
Sep 3 13:51:55.943: INFO: Pod client-envvars-ec0a1e97-bf37-4a48-bc93-cdf4042daab0 no longer exists
[AfterEach] [k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:55.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2048" for this suite.
• [SLOW TEST:6.175 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should contain environment variables for services [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":192,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:52.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in volume subpath
Sep 3 13:51:52.166: INFO: Waiting up to 5m0s for pod "var-expansion-153fea54-1f80-458b-ba78-5d3bab7dc978" in namespace "var-expansion-6499" to be "Succeeded or Failed"
Sep 3 13:51:52.168: INFO: Pod "var-expansion-153fea54-1f80-458b-ba78-5d3bab7dc978": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255815ms
Sep 3 13:51:54.171: INFO: Pod "var-expansion-153fea54-1f80-458b-ba78-5d3bab7dc978": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005282216s
Sep 3 13:51:56.174: INFO: Pod "var-expansion-153fea54-1f80-458b-ba78-5d3bab7dc978": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008264385s
STEP: Saw pod success
Sep 3 13:51:56.175: INFO: Pod "var-expansion-153fea54-1f80-458b-ba78-5d3bab7dc978" satisfied condition "Succeeded or Failed"
Sep 3 13:51:56.178: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod var-expansion-153fea54-1f80-458b-ba78-5d3bab7dc978 container dapi-container:
STEP: delete the pod
Sep 3 13:51:56.190: INFO: Waiting for pod var-expansion-153fea54-1f80-458b-ba78-5d3bab7dc978 to disappear
Sep 3 13:51:56.192: INFO: Pod var-expansion-153fea54-1f80-458b-ba78-5d3bab7dc978 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:56.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6499" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":-1,"completed":12,"skipped":252,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:51:56.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's args
Sep 3 13:51:56.252: INFO: Waiting up to 5m0s for pod "var-expansion-37f1d5b8-ce71-4f81-97a7-ad73720b3b9a" in namespace "var-expansion-3097" to be "Succeeded or Failed"
Sep 3 13:51:56.255: INFO: Pod "var-expansion-37f1d5b8-ce71-4f81-97a7-ad73720b3b9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.582648ms
Sep 3 13:51:58.258: INFO: Pod "var-expansion-37f1d5b8-ce71-4f81-97a7-ad73720b3b9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005436954s
STEP: Saw pod success
Sep 3 13:51:58.258: INFO: Pod "var-expansion-37f1d5b8-ce71-4f81-97a7-ad73720b3b9a" satisfied condition "Succeeded or Failed"
Sep 3 13:51:58.261: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-7jvhm pod var-expansion-37f1d5b8-ce71-4f81-97a7-ad73720b3b9a container dapi-container:
STEP: delete the pod
Sep 3 13:51:58.274: INFO: Waiting for pod var-expansion-37f1d5b8-ce71-4f81-97a7-ad73720b3b9a to disappear
Sep 3 13:51:58.276: INFO: Pod var-expansion-37f1d5b8-ce71-4f81-97a7-ad73720b3b9a no longer exists
[AfterEach] [k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:51:58.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3097" for this suite.
• ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":264,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:51:52.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-02cea34a-e5b8-408b-a5f9-d1e5e64d713a STEP: Creating a pod to test consume secrets Sep 3 13:51:52.388: INFO: Waiting up to 5m0s for pod "pod-secrets-fcc49955-f4c1-43b3-9fa5-997777002eae" in namespace "secrets-1758" to be "Succeeded or Failed" Sep 3 13:51:52.391: INFO: Pod "pod-secrets-fcc49955-f4c1-43b3-9fa5-997777002eae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.602734ms Sep 3 13:51:54.393: INFO: Pod "pod-secrets-fcc49955-f4c1-43b3-9fa5-997777002eae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005466108s Sep 3 13:51:56.397: INFO: Pod "pod-secrets-fcc49955-f4c1-43b3-9fa5-997777002eae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008623129s Sep 3 13:51:58.400: INFO: Pod "pod-secrets-fcc49955-f4c1-43b3-9fa5-997777002eae": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.011732921s STEP: Saw pod success Sep 3 13:51:58.400: INFO: Pod "pod-secrets-fcc49955-f4c1-43b3-9fa5-997777002eae" satisfied condition "Succeeded or Failed" Sep 3 13:51:58.402: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-secrets-fcc49955-f4c1-43b3-9fa5-997777002eae container secret-volume-test: STEP: delete the pod Sep 3 13:51:58.415: INFO: Waiting for pod pod-secrets-fcc49955-f4c1-43b3-9fa5-997777002eae to disappear Sep 3 13:51:58.418: INFO: Pod pod-secrets-fcc49955-f4c1-43b3-9fa5-997777002eae no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:51:58.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1758" for this suite. • [SLOW TEST:6.078 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":336,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:51:58.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects 
works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 3 13:51:58.456: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:51:59.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3853" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":17,"skipped":337,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:51:56.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 3 13:51:56.158: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1ba5cb80-d44c-4945-ac9f-2468cc0d1662" in namespace "projected-6206" to be "Succeeded or Failed" Sep 3 13:51:56.160: INFO: Pod 
"downwardapi-volume-1ba5cb80-d44c-4945-ac9f-2468cc0d1662": Phase="Pending", Reason="", readiness=false. Elapsed: 2.62214ms Sep 3 13:51:58.164: INFO: Pod "downwardapi-volume-1ba5cb80-d44c-4945-ac9f-2468cc0d1662": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005924336s Sep 3 13:52:00.168: INFO: Pod "downwardapi-volume-1ba5cb80-d44c-4945-ac9f-2468cc0d1662": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010080754s STEP: Saw pod success Sep 3 13:52:00.168: INFO: Pod "downwardapi-volume-1ba5cb80-d44c-4945-ac9f-2468cc0d1662" satisfied condition "Succeeded or Failed" Sep 3 13:52:00.171: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod downwardapi-volume-1ba5cb80-d44c-4945-ac9f-2468cc0d1662 container client-container: STEP: delete the pod Sep 3 13:52:00.184: INFO: Waiting for pod downwardapi-volume-1ba5cb80-d44c-4945-ac9f-2468cc0d1662 to disappear Sep 3 13:52:00.187: INFO: Pod downwardapi-volume-1ba5cb80-d44c-4945-ac9f-2468cc0d1662 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:52:00.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6206" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":255,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:51:58.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Sep 3 13:51:58.333: INFO: Waiting up to 5m0s for pod "downward-api-7a50561a-f7fd-4c7f-a56f-613d95f9201e" in namespace "downward-api-9923" to be "Succeeded or Failed" Sep 3 13:51:58.336: INFO: Pod "downward-api-7a50561a-f7fd-4c7f-a56f-613d95f9201e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.553917ms Sep 3 13:52:00.339: INFO: Pod "downward-api-7a50561a-f7fd-4c7f-a56f-613d95f9201e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.005840895s STEP: Saw pod success Sep 3 13:52:00.339: INFO: Pod "downward-api-7a50561a-f7fd-4c7f-a56f-613d95f9201e" satisfied condition "Succeeded or Failed" Sep 3 13:52:00.342: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-7jvhm pod downward-api-7a50561a-f7fd-4c7f-a56f-613d95f9201e container dapi-container: STEP: delete the pod Sep 3 13:52:00.355: INFO: Waiting for pod downward-api-7a50561a-f7fd-4c7f-a56f-613d95f9201e to disappear Sep 3 13:52:00.358: INFO: Pod downward-api-7a50561a-f7fd-4c7f-a56f-613d95f9201e no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:52:00.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9923" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":278,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:52:00.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:52:04.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3262" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":14,"skipped":263,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:51:55.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 3 13:51:55.673: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Sep 3 13:51:59.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9822 --namespace=crd-publish-openapi-9822 create -f -' Sep 3 13:52:00.085: INFO: stderr: "" Sep 3 13:52:00.085: INFO: stdout: "e2e-test-crd-publish-openapi-4541-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Sep 3 13:52:00.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9822 --namespace=crd-publish-openapi-9822 delete e2e-test-crd-publish-openapi-4541-crds test-foo' Sep 3 13:52:00.214: INFO: stderr: "" Sep 3 13:52:00.214: INFO: stdout: "e2e-test-crd-publish-openapi-4541-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" 
deleted\n" Sep 3 13:52:00.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9822 --namespace=crd-publish-openapi-9822 apply -f -' Sep 3 13:52:00.579: INFO: stderr: "" Sep 3 13:52:00.579: INFO: stdout: "e2e-test-crd-publish-openapi-4541-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Sep 3 13:52:00.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9822 --namespace=crd-publish-openapi-9822 delete e2e-test-crd-publish-openapi-4541-crds test-foo' Sep 3 13:52:00.706: INFO: stderr: "" Sep 3 13:52:00.706: INFO: stdout: "e2e-test-crd-publish-openapi-4541-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Sep 3 13:52:00.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9822 --namespace=crd-publish-openapi-9822 create -f -' Sep 3 13:52:00.963: INFO: rc: 1 Sep 3 13:52:00.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9822 --namespace=crd-publish-openapi-9822 apply -f -' Sep 3 13:52:01.223: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Sep 3 13:52:01.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9822 --namespace=crd-publish-openapi-9822 create -f -' Sep 3 13:52:01.485: INFO: rc: 1 Sep 3 13:52:01.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9822 --namespace=crd-publish-openapi-9822 apply -f -' Sep 3 13:52:01.761: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Sep 3 13:52:01.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9822 
explain e2e-test-crd-publish-openapi-4541-crds' Sep 3 13:52:02.008: INFO: stderr: "" Sep 3 13:52:02.008: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4541-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Sep 3 13:52:02.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9822 explain e2e-test-crd-publish-openapi-4541-crds.metadata' Sep 3 13:52:02.348: INFO: stderr: "" Sep 3 13:52:02.348: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4541-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. 
They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. 
After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. 
The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. 
An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Sep 3 13:52:02.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9822 explain e2e-test-crd-publish-openapi-4541-crds.spec' Sep 3 13:52:02.632: INFO: stderr: "" Sep 3 13:52:02.632: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4541-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Sep 3 13:52:02.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9822 explain e2e-test-crd-publish-openapi-4541-crds.spec.bars' Sep 3 13:52:02.897: INFO: stderr: "" Sep 3 13:52:02.897: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4541-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Sep 3 13:52:02.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9822 explain e2e-test-crd-publish-openapi-4541-crds.spec.bars2' Sep 3 13:52:03.151: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:52:07.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9822" for this suite. 
• [SLOW TEST:11.438 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":15,"skipped":397,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:50:03.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-cdaf9e8a-6cb5-4064-b770-5f426a7d9837 in namespace container-probe-1465 Sep 3 13:50:07.202: INFO: Started pod liveness-cdaf9e8a-6cb5-4064-b770-5f426a7d9837 in namespace container-probe-1465 STEP: checking the pod's current state and verifying that restartCount is present Sep 3 13:50:07.205: INFO: Initial restart count of pod liveness-cdaf9e8a-6cb5-4064-b770-5f426a7d9837 is 0 Sep 3 13:50:17.228: INFO: Restart count of pod container-probe-1465/liveness-cdaf9e8a-6cb5-4064-b770-5f426a7d9837 is now 1 (10.022955204s elapsed) Sep 3 13:50:37.429: INFO: Restart count of pod 
container-probe-1465/liveness-cdaf9e8a-6cb5-4064-b770-5f426a7d9837 is now 2 (30.223723548s elapsed) Sep 3 13:50:57.465: INFO: Restart count of pod container-probe-1465/liveness-cdaf9e8a-6cb5-4064-b770-5f426a7d9837 is now 3 (50.260212477s elapsed) Sep 3 13:51:17.502: INFO: Restart count of pod container-probe-1465/liveness-cdaf9e8a-6cb5-4064-b770-5f426a7d9837 is now 4 (1m10.297284596s elapsed) Sep 3 13:52:17.740: INFO: Restart count of pod container-probe-1465/liveness-cdaf9e8a-6cb5-4064-b770-5f426a7d9837 is now 5 (2m10.534784743s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:52:17.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1465" for this suite. • [SLOW TEST:134.598 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":45,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:52:17.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Request ServerVersion STEP: Confirm major version Sep 3 13:52:17.811: INFO: Major version: 1 STEP: Confirm minor version Sep 3 13:52:17.811: INFO: cleanMinorVersion: 19 Sep 3 13:52:17.811: INFO: Minor version: 19 [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:52:17.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-6132" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":6,"skipped":53,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:52:17.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container Sep 3 13:52:19.966: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2785 PodName:pod-sharedvolume-c04050e7-278d-4f9b-8077-92d94b6f40e8 ContainerName:busybox-main-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 3 13:52:19.966: INFO: >>> kubeConfig: /root/.kube/config Sep 3 13:52:20.076: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir
volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:52:20.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2785" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":7,"skipped":65,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:52:20.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Sep 3 13:52:21.017: INFO: starting watch STEP: patching STEP: updating Sep 3 13:52:21.027: INFO: waiting for watch events with expected annotations Sep 3 13:52:21.027: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:52:21.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-5508" for this suite. 
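The CSR API operations exercised above (create, get, list, watch, patch, update, plus the /approval and /status subresources) all act on certificates.k8s.io/v1 CertificateSigningRequest objects. The log does not show the test's request body, so the following is only a hypothetical sketch of the shape such an object takes; the name, signerName, and request contents are placeholders, not values from this run:

```yaml
# Hypothetical CertificateSigningRequest sketch -- not the e2e test's actual object.
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: example-csr              # placeholder name
spec:
  # spec.request must be the base64-encoded PEM of a real CSR; elided here.
  request: <base64-encoded PEM CSR>
  signerName: example.com/example-signer   # placeholder signer
  usages:
    - client auth
```

Approval and status are separate subresources, which is why the test patches and updates /approval and /status individually rather than the main resource.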
• ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":8,"skipped":70,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:52:00.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 3 13:52:00.408: INFO: Pod name rollover-pod: Found 0 pods out of 1 Sep 3 13:52:05.411: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Sep 3 13:52:05.411: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Sep 3 13:52:07.415: INFO: Creating deployment "test-rollover-deployment" Sep 3 13:52:07.423: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Sep 3 13:52:09.430: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Sep 3 13:52:09.437: INFO: Ensure that both replica sets have 1 created replica Sep 3 13:52:09.447: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Sep 3 13:52:09.457: INFO: Updating deployment test-rollover-deployment Sep 3 13:52:09.457: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Sep 3 13:52:11.464: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Sep 3 13:52:11.471: INFO: Make 
sure deployment "test-rollover-deployment" is complete Sep 3 13:52:11.478: INFO: all replica sets need to contain the pod-template-hash label Sep 3 13:52:11.478: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273927, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273927, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273931, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273927, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 3 13:52:13.487: INFO: all replica sets need to contain the pod-template-hash label Sep 3 13:52:13.487: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273927, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273927, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273931, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273927, 
loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 3 13:52:15.491: INFO: all replica sets need to contain the pod-template-hash label Sep 3 13:52:15.491: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273927, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273927, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273931, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273927, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 3 13:52:17.487: INFO: all replica sets need to contain the pod-template-hash label Sep 3 13:52:17.487: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273927, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273927, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273931, 
loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273927, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 3 13:52:19.486: INFO: all replica sets need to contain the pod-template-hash label Sep 3 13:52:19.486: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273927, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273927, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273931, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273927, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 3 13:52:21.486: INFO: Sep 3 13:52:21.486: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 3 13:52:21.495: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9106 /apis/apps/v1/namespaces/deployment-9106/deployments/test-rollover-deployment 2eacd886-59f7-4f55-81c2-6cdf3ca08f89 1049537 2 2021-09-03 13:52:07 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-09-03 13:52:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-09-03 13:52:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0050150d8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-09-03 13:52:07 +0000 UTC,LastTransitionTime:2021-09-03 13:52:07 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-5797c7764" has successfully progressed.,LastUpdateTime:2021-09-03 13:52:21 +0000 UTC,LastTransitionTime:2021-09-03 13:52:07 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Sep 3 13:52:21.498: INFO: New ReplicaSet "test-rollover-deployment-5797c7764" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-5797c7764 deployment-9106 /apis/apps/v1/namespaces/deployment-9106/replicasets/test-rollover-deployment-5797c7764 554a69c2-acb1-49f2-a91a-696f5efeb11f 1049521 2 2021-09-03 13:52:09 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 2eacd886-59f7-4f55-81c2-6cdf3ca08f89 0xc0050155f0 0xc0050155f1}] [] [{kube-controller-manager Update apps/v1 2021-09-03 13:52:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2eacd886-59f7-4f55-81c2-6cdf3ca08f89\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5797c7764,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005015668 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Sep 3 13:52:21.498: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Sep 3 13:52:21.498: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9106 /apis/apps/v1/namespaces/deployment-9106/replicasets/test-rollover-controller 8b0c2366-b05a-4e3e-8000-c0cc2e7ed2df 1049535 2 2021-09-03 13:52:00 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 2eacd886-59f7-4f55-81c2-6cdf3ca08f89 0xc0050154e7 0xc0050154e8}] [] [{e2e.test Update apps/v1 2021-09-03 13:52:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-09-03 13:52:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2eacd886-59f7-4f55-81c2-6cdf3ca08f89\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005015588 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 3 13:52:21.499: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-9106 /apis/apps/v1/namespaces/deployment-9106/replicasets/test-rollover-deployment-78bc8b888c 61120d9a-7141-4854-97bd-3b7f1fa176f7 1049358 2 2021-09-03 13:52:07 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 2eacd886-59f7-4f55-81c2-6cdf3ca08f89 0xc0050156d7 0xc0050156d8}] [] [{kube-controller-manager Update apps/v1 2021-09-03 13:52:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2eacd886-59f7-4f55-81c2-6cdf3ca08f89\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005015768 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] 
nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 3 13:52:21.502: INFO: Pod "test-rollover-deployment-5797c7764-bdhqr" is available: &Pod{ObjectMeta:{test-rollover-deployment-5797c7764-bdhqr test-rollover-deployment-5797c7764- deployment-9106 /api/v1/namespaces/deployment-9106/pods/test-rollover-deployment-5797c7764-bdhqr 0d6636f6-0dc0-4c50-9d2e-938b78d96c90 1049378 0 2021-09-03 13:52:09 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [{apps/v1 ReplicaSet test-rollover-deployment-5797c7764 554a69c2-acb1-49f2-a91a-696f5efeb11f 0xc005015cf0 0xc005015cf1}] [] [{kube-controller-manager Update v1 2021-09-03 13:52:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"554a69c2-acb1-49f2-a91a-696f5efeb11f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-09-03 13:52:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.136\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5tb5g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5tb5g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5tb5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePol
icy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-7jvhm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:52:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:52:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:52:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:52:09 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:192.168.1.136,StartTime:2021-09-03 13:52:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-09-03 13:52:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://e7bf7ad79f6e169588c35fd605704e8c2ae5b25d69ccb821c92eead692ce4587,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.136,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:52:21.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9106" for this suite. 
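The rollover behavior verified above (the old ReplicaSets scaled to zero only once the new one is ready) follows from the rolling-update parameters visible in the DeploymentSpec dump: MaxUnavailable:0, MaxSurge:1, MinReadySeconds:10. Reconstructed as a manifest from those dumped fields, the deployment looks roughly like the sketch below; field values are taken from the log, but the exact YAML the test generates is not shown, so treat this as an approximation:

```yaml
# Sketch reconstructed from the DeploymentSpec dump in the log above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
  labels:
    name: rollover-pod
spec:
  replicas: 1
  minReadySeconds: 10          # new pod must be ready 10s before counting as available
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # never drop below the desired replica count
      maxSurge: 1              # allow one surge pod while the new ReplicaSet comes up
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
        - name: agnhost
          image: k8s.gcr.io/e2e-test-images/agnhost:2.20
          imagePullPolicy: IfNotPresent
```

With maxUnavailable: 0 and maxSurge: 1, the controller briefly runs two pods (Replicas:2 in the status dumps) and only deletes the old pod after the new one has been ready for minReadySeconds, which is the waiting the log records between 13:52:11 and 13:52:21.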
• [SLOW TEST:21.136 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":15,"skipped":282,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:52:21.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1307 STEP: creating the pod Sep 3 13:52:21.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2358 create -f -' Sep 3 13:52:21.484: INFO: stderr: "" Sep 3 13:52:21.484: INFO: stdout: "pod/pause created\n" Sep 3 13:52:21.485: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Sep 3 13:52:21.485: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2358" to be "running and ready" Sep 3 13:52:21.487: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.820453ms Sep 3 13:52:23.491: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.00680005s Sep 3 13:52:23.491: INFO: Pod "pause" satisfied condition "running and ready" Sep 3 13:52:23.492: INFO: Wanted all 1 pods to be running and ready. 
Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Sep 3 13:52:23.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2358 label pods pause testing-label=testing-label-value' Sep 3 13:52:23.629: INFO: stderr: "" Sep 3 13:52:23.629: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Sep 3 13:52:23.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2358 get pod pause -L testing-label' Sep 3 13:52:23.756: INFO: stderr: "" Sep 3 13:52:23.756: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n" STEP: removing the label testing-label of a pod Sep 3 13:52:23.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2358 label pods pause testing-label-' Sep 3 13:52:23.884: INFO: stderr: "" Sep 3 13:52:23.884: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Sep 3 13:52:23.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2358 get pod pause -L testing-label' Sep 3 13:52:24.009: INFO: stderr: "" Sep 3 13:52:24.009: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1313 STEP: using delete to clean up resources Sep 3 13:52:24.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2358 delete --grace-period=0 --force -f -' Sep 3 13:52:24.131: INFO: stderr: "warning: Immediate deletion does not wait for confirmation 
that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 3 13:52:24.131: INFO: stdout: "pod \"pause\" force deleted\n" Sep 3 13:52:24.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2358 get rc,svc -l name=pause --no-headers' Sep 3 13:52:24.253: INFO: stderr: "No resources found in kubectl-2358 namespace.\n" Sep 3 13:52:24.253: INFO: stdout: "" Sep 3 13:52:24.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2358 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 3 13:52:24.375: INFO: stderr: "" Sep 3 13:52:24.375: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:52:24.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2358" for this suite. 
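The "pause" pod in the label test above is created from stdin, so its manifest never appears in the log. A minimal reconstruction might look like the following; the container image is an assumption, and the testing-label entry shows where the label the test adds and removes would live:

```yaml
# Hypothetical reconstruction of the "pause" pod -- the log does not show the manifest.
apiVersion: v1
kind: Pod
metadata:
  name: pause
  labels:
    testing-label: testing-label-value   # label added, verified, then removed by the test
spec:
  containers:
    - name: pause
      image: k8s.gcr.io/pause:3.2        # assumed image, not from the log
```

The two kubectl invocations recorded in the log drive the label lifecycle: `kubectl label pods pause testing-label=testing-label-value` adds it, and the trailing-dash form `kubectl label pods pause testing-label-` removes it.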
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":9,"skipped":77,"failed":0}
SSS
------------------------------
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:52:05.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 3 13:52:05.064: INFO: The status of Pod test-webserver-0472f942-e8f1-4b2d-a8b5-bd38444e14d9 is Pending, waiting for it to be Running (with Ready = true)
Sep 3 13:52:07.067: INFO: The status of Pod test-webserver-0472f942-e8f1-4b2d-a8b5-bd38444e14d9 is Running (Ready = false)
Sep 3 13:52:09.067: INFO: The status of Pod test-webserver-0472f942-e8f1-4b2d-a8b5-bd38444e14d9 is Running (Ready = false)
Sep 3 13:52:11.067: INFO: The status of Pod test-webserver-0472f942-e8f1-4b2d-a8b5-bd38444e14d9 is Running (Ready = false)
Sep 3 13:52:13.067: INFO: The status of Pod test-webserver-0472f942-e8f1-4b2d-a8b5-bd38444e14d9 is Running (Ready = false)
Sep 3 13:52:15.067: INFO: The status of Pod test-webserver-0472f942-e8f1-4b2d-a8b5-bd38444e14d9 is Running (Ready = false)
Sep 3 13:52:17.067: INFO: The status of Pod test-webserver-0472f942-e8f1-4b2d-a8b5-bd38444e14d9 is Running (Ready = false)
Sep 3 13:52:19.067: INFO: The status of Pod test-webserver-0472f942-e8f1-4b2d-a8b5-bd38444e14d9 is Running (Ready = false)
Sep 3 13:52:21.066: INFO: The status of Pod test-webserver-0472f942-e8f1-4b2d-a8b5-bd38444e14d9 is Running (Ready = false)
Sep 3 13:52:23.067: INFO: The status of Pod test-webserver-0472f942-e8f1-4b2d-a8b5-bd38444e14d9 is Running (Ready = false)
Sep 3 13:52:25.068: INFO: The status of Pod test-webserver-0472f942-e8f1-4b2d-a8b5-bd38444e14d9 is Running (Ready = false)
Sep 3 13:52:27.068: INFO: The status of Pod test-webserver-0472f942-e8f1-4b2d-a8b5-bd38444e14d9 is Running (Ready = true)
Sep 3 13:52:27.071: INFO: Container started at 2021-09-03 13:52:05 +0000 UTC, pod became ready at 2021-09-03 13:52:26 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:52:27.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3888" for this suite.
• [SLOW TEST:22.055 seconds]
[k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":267,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:52:07.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-projected-kb7f
STEP: Creating a pod to test atomic-volume-subpath
Sep 3 13:52:07.151: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-kb7f" in namespace "subpath-874" to be "Succeeded or Failed"
Sep 3 13:52:07.154: INFO: Pod "pod-subpath-test-projected-kb7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.555298ms
Sep 3 13:52:09.158: INFO: Pod "pod-subpath-test-projected-kb7f": Phase="Running", Reason="", readiness=true. Elapsed: 2.00630633s
Sep 3 13:52:11.162: INFO: Pod "pod-subpath-test-projected-kb7f": Phase="Running", Reason="", readiness=true. Elapsed: 4.010270021s
Sep 3 13:52:13.218: INFO: Pod "pod-subpath-test-projected-kb7f": Phase="Running", Reason="", readiness=true. Elapsed: 6.066407406s
Sep 3 13:52:15.222: INFO: Pod "pod-subpath-test-projected-kb7f": Phase="Running", Reason="", readiness=true. Elapsed: 8.070282409s
Sep 3 13:52:17.226: INFO: Pod "pod-subpath-test-projected-kb7f": Phase="Running", Reason="", readiness=true. Elapsed: 10.074242753s
Sep 3 13:52:19.229: INFO: Pod "pod-subpath-test-projected-kb7f": Phase="Running", Reason="", readiness=true. Elapsed: 12.077978578s
Sep 3 13:52:21.232: INFO: Pod "pod-subpath-test-projected-kb7f": Phase="Running", Reason="", readiness=true. Elapsed: 14.081193879s
Sep 3 13:52:23.236: INFO: Pod "pod-subpath-test-projected-kb7f": Phase="Running", Reason="", readiness=true. Elapsed: 16.084953987s
Sep 3 13:52:25.240: INFO: Pod "pod-subpath-test-projected-kb7f": Phase="Running", Reason="", readiness=true. Elapsed: 18.088553049s
Sep 3 13:52:27.243: INFO: Pod "pod-subpath-test-projected-kb7f": Phase="Running", Reason="", readiness=true. Elapsed: 20.091827945s
Sep 3 13:52:29.247: INFO: Pod "pod-subpath-test-projected-kb7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.095774128s
STEP: Saw pod success
Sep 3 13:52:29.247: INFO: Pod "pod-subpath-test-projected-kb7f" satisfied condition "Succeeded or Failed"
Sep 3 13:52:29.250: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-subpath-test-projected-kb7f container test-container-subpath-projected-kb7f:
STEP: delete the pod
Sep 3 13:52:29.266: INFO: Waiting for pod pod-subpath-test-projected-kb7f to disappear
Sep 3 13:52:29.269: INFO: Pod pod-subpath-test-projected-kb7f no longer exists
STEP: Deleting pod pod-subpath-test-projected-kb7f
Sep 3 13:52:29.269: INFO: Deleting pod "pod-subpath-test-projected-kb7f" in namespace "subpath-874"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:52:29.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-874" for this suite.
• [SLOW TEST:22.178 seconds]
[sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":16,"skipped":408,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:52:29.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Sep 3 13:52:31.935: INFO: Successfully updated pod "pod-update-a55a308e-1d11-4c3a-9ab8-168c05f35842"
STEP: verifying the updated pod is in kubernetes
Sep 3 13:52:32.226: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:52:32.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4841" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":434,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:52:24.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-crhg4 in namespace proxy-6963
I0903 13:52:24.441394 28 runners.go:190] Created replication controller with name: proxy-service-crhg4, namespace: proxy-6963, replica count: 1
I0903 13:52:25.491931 28 runners.go:190] proxy-service-crhg4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0903 13:52:26.492247 28 runners.go:190] proxy-service-crhg4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0903 13:52:27.492582 28 runners.go:190] proxy-service-crhg4 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Sep 3 13:52:27.496: INFO: setup took 3.069746569s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Sep 3 13:52:27.504: INFO: (0) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:162/proxy/: bar (200; 7.045213ms)
Sep 3 13:52:27.504: INFO: (0) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname2/proxy/: bar (200; 7.322262ms)
Sep 3 13:52:27.504: INFO: (0)
/api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname1/proxy/: foo (200; 7.707426ms) Sep 3 13:52:27.504: INFO: (0) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname1/proxy/: foo (200; 7.394966ms) Sep 3 13:52:27.504: INFO: (0) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:162/proxy/: bar (200; 7.462449ms) Sep 3 13:52:27.504: INFO: (0) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname2/proxy/: bar (200; 7.468171ms) Sep 3 13:52:27.505: INFO: (0) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:160/proxy/: foo (200; 7.873398ms) Sep 3 13:52:27.505: INFO: (0) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:160/proxy/: foo (200; 7.961395ms) Sep 3 13:52:27.505: INFO: (0) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw/proxy/: test (200; 8.165426ms) Sep 3 13:52:27.508: INFO: (0) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:1080/proxy/: test<... (200; 11.179728ms) Sep 3 13:52:27.508: INFO: (0) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:1080/proxy/: ... 
(200; 10.695769ms) Sep 3 13:52:27.513: INFO: (0) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:462/proxy/: tls qux (200; 15.856257ms) Sep 3 13:52:27.513: INFO: (0) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname2/proxy/: tls qux (200; 15.992304ms) Sep 3 13:52:27.514: INFO: (0) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:460/proxy/: tls baz (200; 16.954099ms) Sep 3 13:52:27.514: INFO: (0) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname1/proxy/: tls baz (200; 16.75812ms) Sep 3 13:52:27.514: INFO: (0) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:443/proxy/: test (200; 5.053607ms) Sep 3 13:52:27.519: INFO: (1) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:162/proxy/: bar (200; 5.045684ms) Sep 3 13:52:27.519: INFO: (1) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:462/proxy/: tls qux (200; 5.176977ms) Sep 3 13:52:27.520: INFO: (1) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname1/proxy/: foo (200; 5.57485ms) Sep 3 13:52:27.520: INFO: (1) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:160/proxy/: foo (200; 5.67145ms) Sep 3 13:52:27.520: INFO: (1) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:460/proxy/: tls baz (200; 5.572501ms) Sep 3 13:52:27.520: INFO: (1) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname2/proxy/: bar (200; 5.696138ms) Sep 3 13:52:27.520: INFO: (1) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname2/proxy/: tls qux (200; 5.749858ms) Sep 3 13:52:27.520: INFO: (1) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname1/proxy/: tls baz (200; 5.937177ms) Sep 3 13:52:27.520: INFO: (1) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname2/proxy/: bar (200; 5.986627ms) Sep 3 13:52:27.520: INFO: (1) 
/api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname1/proxy/: foo (200; 6.100225ms) Sep 3 13:52:27.520: INFO: (1) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:1080/proxy/: ... (200; 5.993941ms) Sep 3 13:52:27.520: INFO: (1) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:162/proxy/: bar (200; 6.221155ms) Sep 3 13:52:27.520: INFO: (1) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:160/proxy/: foo (200; 6.176923ms) Sep 3 13:52:27.520: INFO: (1) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:443/proxy/: test<... (200; 6.387185ms) Sep 3 13:52:27.525: INFO: (2) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:162/proxy/: bar (200; 4.379211ms) Sep 3 13:52:27.526: INFO: (2) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname1/proxy/: foo (200; 5.387259ms) Sep 3 13:52:27.526: INFO: (2) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname2/proxy/: bar (200; 5.469594ms) Sep 3 13:52:27.526: INFO: (2) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname1/proxy/: foo (200; 5.693859ms) Sep 3 13:52:27.526: INFO: (2) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname2/proxy/: tls qux (200; 5.683435ms) Sep 3 13:52:27.527: INFO: (2) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:1080/proxy/: test<... 
(200; 5.838006ms) Sep 3 13:52:27.527: INFO: (2) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:160/proxy/: foo (200; 5.903262ms) Sep 3 13:52:27.527: INFO: (2) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname1/proxy/: tls baz (200; 5.81337ms) Sep 3 13:52:27.527: INFO: (2) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname2/proxy/: bar (200; 5.96832ms) Sep 3 13:52:27.527: INFO: (2) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:162/proxy/: bar (200; 5.986069ms) Sep 3 13:52:27.527: INFO: (2) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:160/proxy/: foo (200; 6.067742ms) Sep 3 13:52:27.527: INFO: (2) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:443/proxy/: ... (200; 6.247404ms) Sep 3 13:52:27.527: INFO: (2) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw/proxy/: test (200; 6.446408ms) Sep 3 13:52:27.531: INFO: (3) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:1080/proxy/: test<... (200; 3.984818ms) Sep 3 13:52:27.532: INFO: (3) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname1/proxy/: foo (200; 4.376046ms) Sep 3 13:52:27.532: INFO: (3) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname2/proxy/: bar (200; 4.862611ms) Sep 3 13:52:27.532: INFO: (3) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname1/proxy/: foo (200; 4.84363ms) Sep 3 13:52:27.532: INFO: (3) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname2/proxy/: bar (200; 4.878277ms) Sep 3 13:52:27.532: INFO: (3) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:162/proxy/: bar (200; 4.952981ms) Sep 3 13:52:27.532: INFO: (3) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:1080/proxy/: ... 
(200; 5.090259ms) Sep 3 13:52:27.532: INFO: (3) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname1/proxy/: tls baz (200; 5.095035ms) Sep 3 13:52:27.532: INFO: (3) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:160/proxy/: foo (200; 5.032837ms) Sep 3 13:52:27.532: INFO: (3) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:162/proxy/: bar (200; 5.161329ms) Sep 3 13:52:27.532: INFO: (3) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname2/proxy/: tls qux (200; 5.099392ms) Sep 3 13:52:27.532: INFO: (3) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:160/proxy/: foo (200; 5.267127ms) Sep 3 13:52:27.532: INFO: (3) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw/proxy/: test (200; 5.349872ms) Sep 3 13:52:27.533: INFO: (3) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:462/proxy/: tls qux (200; 5.456819ms) Sep 3 13:52:27.533: INFO: (3) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:460/proxy/: tls baz (200; 5.51025ms) Sep 3 13:52:27.533: INFO: (3) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:443/proxy/: ... (200; 5.191509ms) Sep 3 13:52:27.538: INFO: (4) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw/proxy/: test (200; 5.251919ms) Sep 3 13:52:27.539: INFO: (4) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname1/proxy/: tls baz (200; 6.208508ms) Sep 3 13:52:27.540: INFO: (4) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:462/proxy/: tls qux (200; 7.054291ms) Sep 3 13:52:27.540: INFO: (4) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:460/proxy/: tls baz (200; 7.616377ms) Sep 3 13:52:27.541: INFO: (4) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:443/proxy/: test<... 
(200; 8.435037ms) Sep 3 13:52:27.541: INFO: (4) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:162/proxy/: bar (200; 8.384566ms) Sep 3 13:52:27.541: INFO: (4) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:160/proxy/: foo (200; 8.362512ms) Sep 3 13:52:27.545: INFO: (5) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:162/proxy/: bar (200; 3.520962ms) Sep 3 13:52:27.545: INFO: (5) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw/proxy/: test (200; 3.923417ms) Sep 3 13:52:27.546: INFO: (5) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname1/proxy/: foo (200; 4.329647ms) Sep 3 13:52:27.546: INFO: (5) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:1080/proxy/: test<... (200; 4.278299ms) Sep 3 13:52:27.546: INFO: (5) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname2/proxy/: bar (200; 4.362139ms) Sep 3 13:52:27.546: INFO: (5) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname2/proxy/: bar (200; 4.64045ms) Sep 3 13:52:27.546: INFO: (5) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:160/proxy/: foo (200; 4.89696ms) Sep 3 13:52:27.546: INFO: (5) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:162/proxy/: bar (200; 4.887724ms) Sep 3 13:52:27.546: INFO: (5) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:460/proxy/: tls baz (200; 4.867097ms) Sep 3 13:52:27.546: INFO: (5) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname2/proxy/: tls qux (200; 4.808089ms) Sep 3 13:52:27.547: INFO: (5) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:443/proxy/: ... (200; 5.161759ms) Sep 3 13:52:27.547: INFO: (5) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:462/proxy/: tls qux (200; 5.422916ms) Sep 3 13:52:27.552: INFO: (6) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:1080/proxy/: test<... 
(200; 4.357911ms) Sep 3 13:52:27.552: INFO: (6) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:162/proxy/: bar (200; 4.362776ms) Sep 3 13:52:27.552: INFO: (6) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:160/proxy/: foo (200; 4.766099ms) Sep 3 13:52:27.552: INFO: (6) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname2/proxy/: bar (200; 5.075288ms) Sep 3 13:52:27.552: INFO: (6) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname1/proxy/: foo (200; 5.145165ms) Sep 3 13:52:27.552: INFO: (6) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:460/proxy/: tls baz (200; 5.092246ms) Sep 3 13:52:27.552: INFO: (6) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname2/proxy/: bar (200; 5.201422ms) Sep 3 13:52:27.552: INFO: (6) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname1/proxy/: tls baz (200; 5.36797ms) Sep 3 13:52:27.552: INFO: (6) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname2/proxy/: tls qux (200; 5.372244ms) Sep 3 13:52:27.553: INFO: (6) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:443/proxy/: test (200; 5.738244ms) Sep 3 13:52:27.553: INFO: (6) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:162/proxy/: bar (200; 5.859596ms) Sep 3 13:52:27.553: INFO: (6) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:462/proxy/: tls qux (200; 5.907592ms) Sep 3 13:52:27.553: INFO: (6) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:160/proxy/: foo (200; 6.116692ms) Sep 3 13:52:27.553: INFO: (6) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:1080/proxy/: ... 
(200; 6.146284ms) Sep 3 13:52:27.557: INFO: (7) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:162/proxy/: bar (200; 4.133733ms) Sep 3 13:52:27.558: INFO: (7) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname2/proxy/: bar (200; 5.05211ms) Sep 3 13:52:27.559: INFO: (7) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:460/proxy/: tls baz (200; 5.312503ms) Sep 3 13:52:27.559: INFO: (7) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname1/proxy/: foo (200; 5.381733ms) Sep 3 13:52:27.559: INFO: (7) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname2/proxy/: bar (200; 5.430857ms) Sep 3 13:52:27.559: INFO: (7) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw/proxy/: test (200; 5.618055ms) Sep 3 13:52:27.559: INFO: (7) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname1/proxy/: tls baz (200; 5.53591ms) Sep 3 13:52:27.559: INFO: (7) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:160/proxy/: foo (200; 5.509027ms) Sep 3 13:52:27.559: INFO: (7) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:162/proxy/: bar (200; 5.754327ms) Sep 3 13:52:27.559: INFO: (7) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname1/proxy/: foo (200; 6.183429ms) Sep 3 13:52:27.559: INFO: (7) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname2/proxy/: tls qux (200; 6.004763ms) Sep 3 13:52:27.559: INFO: (7) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:443/proxy/: test<... (200; 6.263426ms) Sep 3 13:52:27.560: INFO: (7) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:1080/proxy/: ... 
(200; 6.15374ms) Sep 3 13:52:27.560: INFO: (7) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:160/proxy/: foo (200; 6.336753ms) Sep 3 13:52:27.564: INFO: (8) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:160/proxy/: foo (200; 4.114724ms) Sep 3 13:52:27.564: INFO: (8) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname2/proxy/: bar (200; 4.151115ms) Sep 3 13:52:27.564: INFO: (8) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:162/proxy/: bar (200; 4.185239ms) Sep 3 13:52:27.564: INFO: (8) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname1/proxy/: foo (200; 4.446879ms) Sep 3 13:52:27.564: INFO: (8) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:160/proxy/: foo (200; 4.529555ms) Sep 3 13:52:27.564: INFO: (8) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:1080/proxy/: ... (200; 4.673665ms) Sep 3 13:52:27.564: INFO: (8) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:460/proxy/: tls baz (200; 4.705733ms) Sep 3 13:52:27.564: INFO: (8) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:162/proxy/: bar (200; 4.767044ms) Sep 3 13:52:27.565: INFO: (8) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname1/proxy/: foo (200; 5.360755ms) Sep 3 13:52:27.565: INFO: (8) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname2/proxy/: bar (200; 5.417753ms) Sep 3 13:52:27.565: INFO: (8) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw/proxy/: test (200; 5.565875ms) Sep 3 13:52:27.565: INFO: (8) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:1080/proxy/: test<... (200; 5.548933ms) Sep 3 13:52:27.565: INFO: (8) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:443/proxy/: test<... (200; 5.0607ms) Sep 3 13:52:27.571: INFO: (9) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:1080/proxy/: ... 
(200; 5.062515ms) Sep 3 13:52:27.571: INFO: (9) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:162/proxy/: bar (200; 5.393489ms) Sep 3 13:52:27.571: INFO: (9) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw/proxy/: test (200; 5.539598ms) Sep 3 13:52:27.571: INFO: (9) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:462/proxy/: tls qux (200; 5.442702ms) Sep 3 13:52:27.571: INFO: (9) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname2/proxy/: tls qux (200; 5.458064ms) Sep 3 13:52:27.571: INFO: (9) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:160/proxy/: foo (200; 5.481897ms) Sep 3 13:52:27.571: INFO: (9) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:443/proxy/: test (200; 5.063998ms) Sep 3 13:52:27.577: INFO: (10) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:160/proxy/: foo (200; 5.141343ms) Sep 3 13:52:27.577: INFO: (10) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname2/proxy/: bar (200; 5.1528ms) Sep 3 13:52:27.577: INFO: (10) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname1/proxy/: tls baz (200; 5.130899ms) Sep 3 13:52:27.577: INFO: (10) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:1080/proxy/: ... (200; 5.363595ms) Sep 3 13:52:27.577: INFO: (10) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:1080/proxy/: test<... (200; 5.351279ms) Sep 3 13:52:27.577: INFO: (10) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname1/proxy/: foo (200; 5.448829ms) Sep 3 13:52:27.577: INFO: (10) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:460/proxy/: tls baz (200; 5.359682ms) Sep 3 13:52:27.577: INFO: (10) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:443/proxy/: test (200; 4.703313ms) Sep 3 13:52:27.582: INFO: (11) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:1080/proxy/: test<... 
(200; 4.601302ms) Sep 3 13:52:27.582: INFO: (11) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:1080/proxy/: ... (200; 4.667478ms) Sep 3 13:52:27.582: INFO: (11) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:160/proxy/: foo (200; 4.683184ms) Sep 3 13:52:27.582: INFO: (11) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname2/proxy/: tls qux (200; 4.831925ms) Sep 3 13:52:27.582: INFO: (11) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:160/proxy/: foo (200; 5.079338ms) Sep 3 13:52:27.582: INFO: (11) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:462/proxy/: tls qux (200; 5.107054ms) Sep 3 13:52:27.582: INFO: (11) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname1/proxy/: tls baz (200; 4.984359ms) Sep 3 13:52:27.582: INFO: (11) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:460/proxy/: tls baz (200; 5.085898ms) Sep 3 13:52:27.587: INFO: (12) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname2/proxy/: bar (200; 4.593403ms) Sep 3 13:52:27.587: INFO: (12) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname2/proxy/: tls qux (200; 4.523955ms) Sep 3 13:52:27.587: INFO: (12) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:162/proxy/: bar (200; 4.547761ms) Sep 3 13:52:27.587: INFO: (12) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname1/proxy/: foo (200; 4.637959ms) Sep 3 13:52:27.587: INFO: (12) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:1080/proxy/: test<... 
(200; 4.704126ms) Sep 3 13:52:27.587: INFO: (12) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname1/proxy/: foo (200; 4.766471ms) Sep 3 13:52:27.587: INFO: (12) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:462/proxy/: tls qux (200; 4.800002ms) Sep 3 13:52:27.587: INFO: (12) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname2/proxy/: bar (200; 4.725765ms) Sep 3 13:52:27.588: INFO: (12) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:443/proxy/: test (200; 5.511882ms) Sep 3 13:52:27.588: INFO: (12) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:1080/proxy/: ... (200; 5.709965ms) Sep 3 13:52:27.588: INFO: (12) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:460/proxy/: tls baz (200; 5.722239ms) Sep 3 13:52:27.588: INFO: (12) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:160/proxy/: foo (200; 5.625568ms) Sep 3 13:52:27.598: INFO: (13) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:1080/proxy/: ... (200; 9.798454ms) Sep 3 13:52:27.598: INFO: (13) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:160/proxy/: foo (200; 9.837911ms) Sep 3 13:52:27.598: INFO: (13) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:462/proxy/: tls qux (200; 9.985089ms) Sep 3 13:52:27.598: INFO: (13) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw/proxy/: test (200; 9.910737ms) Sep 3 13:52:27.598: INFO: (13) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:162/proxy/: bar (200; 10.135733ms) Sep 3 13:52:27.599: INFO: (13) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname1/proxy/: foo (200; 10.288206ms) Sep 3 13:52:27.599: INFO: (13) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname2/proxy/: bar (200; 11.15936ms) Sep 3 13:52:27.599: INFO: (13) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:443/proxy/: test<... 
(200; 11.115556ms) Sep 3 13:52:27.600: INFO: (13) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname1/proxy/: foo (200; 11.167468ms) Sep 3 13:52:27.600: INFO: (13) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname2/proxy/: bar (200; 11.193579ms) Sep 3 13:52:27.600: INFO: (13) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:160/proxy/: foo (200; 11.339143ms) Sep 3 13:52:27.600: INFO: (13) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:460/proxy/: tls baz (200; 11.558974ms) Sep 3 13:52:27.600: INFO: (13) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname1/proxy/: tls baz (200; 11.347337ms) Sep 3 13:52:27.600: INFO: (13) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:162/proxy/: bar (200; 11.459203ms) Sep 3 13:52:27.600: INFO: (13) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname2/proxy/: tls qux (200; 11.369451ms) Sep 3 13:52:27.603: INFO: (14) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:1080/proxy/: ... 
(200; 2.670555ms) Sep 3 13:52:27.603: INFO: (14) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:160/proxy/: foo (200; 2.872913ms) Sep 3 13:52:27.603: INFO: (14) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:160/proxy/: foo (200; 2.890521ms) Sep 3 13:52:27.604: INFO: (14) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname1/proxy/: tls baz (200; 3.847571ms) Sep 3 13:52:27.604: INFO: (14) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname2/proxy/: tls qux (200; 4.090327ms) Sep 3 13:52:27.604: INFO: (14) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname2/proxy/: bar (200; 4.021076ms) Sep 3 13:52:27.604: INFO: (14) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:162/proxy/: bar (200; 4.033167ms) Sep 3 13:52:27.604: INFO: (14) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:462/proxy/: tls qux (200; 4.161895ms) Sep 3 13:52:27.604: INFO: (14) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname1/proxy/: foo (200; 4.162834ms) Sep 3 13:52:27.604: INFO: (14) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname1/proxy/: foo (200; 4.105774ms) Sep 3 13:52:27.604: INFO: (14) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:443/proxy/: test<... (200; 4.326872ms) Sep 3 13:52:27.604: INFO: (14) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:460/proxy/: tls baz (200; 4.348069ms) Sep 3 13:52:27.604: INFO: (14) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw/proxy/: test (200; 4.385002ms) Sep 3 13:52:27.604: INFO: (14) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:162/proxy/: bar (200; 4.360883ms) Sep 3 13:52:27.607: INFO: (15) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:1080/proxy/: test<... 
(200; 2.594306ms) Sep 3 13:52:27.609: INFO: (15) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname2/proxy/: bar (200; 4.494255ms) Sep 3 13:52:27.609: INFO: (15) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname1/proxy/: foo (200; 4.739674ms) Sep 3 13:52:27.609: INFO: (15) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname1/proxy/: tls baz (200; 4.689902ms) Sep 3 13:52:27.609: INFO: (15) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname1/proxy/: foo (200; 4.665897ms) Sep 3 13:52:27.609: INFO: (15) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname2/proxy/: bar (200; 4.676918ms) Sep 3 13:52:27.609: INFO: (15) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:1080/proxy/: ... (200; 4.726699ms) Sep 3 13:52:27.609: INFO: (15) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname2/proxy/: tls qux (200; 4.90887ms) Sep 3 13:52:27.610: INFO: (15) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:160/proxy/: foo (200; 4.945753ms) Sep 3 13:52:27.610: INFO: (15) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:162/proxy/: bar (200; 4.893859ms) Sep 3 13:52:27.610: INFO: (15) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:462/proxy/: tls qux (200; 4.944149ms) Sep 3 13:52:27.610: INFO: (15) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:443/proxy/: test (200; 5.049464ms) Sep 3 13:52:27.610: INFO: (15) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:460/proxy/: tls baz (200; 5.023578ms) Sep 3 13:52:27.610: INFO: (15) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:160/proxy/: foo (200; 5.022908ms) Sep 3 13:52:27.610: INFO: (15) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:162/proxy/: bar (200; 5.004756ms) Sep 3 13:52:27.613: INFO: (16) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname2/proxy/: bar 
(200; 3.591661ms) Sep 3 13:52:27.613: INFO: (16) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname1/proxy/: tls baz (200; 3.556835ms) Sep 3 13:52:27.614: INFO: (16) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:160/proxy/: foo (200; 3.96581ms) Sep 3 13:52:27.614: INFO: (16) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:162/proxy/: bar (200; 3.987373ms) Sep 3 13:52:27.614: INFO: (16) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw/proxy/: test (200; 4.039469ms) Sep 3 13:52:27.614: INFO: (16) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname2/proxy/: bar (200; 4.074452ms) Sep 3 13:52:27.614: INFO: (16) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname1/proxy/: foo (200; 4.15055ms) Sep 3 13:52:27.614: INFO: (16) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname2/proxy/: tls qux (200; 4.259668ms) Sep 3 13:52:27.614: INFO: (16) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:460/proxy/: tls baz (200; 4.283685ms) Sep 3 13:52:27.614: INFO: (16) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:1080/proxy/: test<... (200; 4.330613ms) Sep 3 13:52:27.614: INFO: (16) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname1/proxy/: foo (200; 4.320507ms) Sep 3 13:52:27.614: INFO: (16) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:162/proxy/: bar (200; 4.401896ms) Sep 3 13:52:27.614: INFO: (16) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:1080/proxy/: ... (200; 4.507615ms) Sep 3 13:52:27.614: INFO: (16) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:462/proxy/: tls qux (200; 4.621459ms) Sep 3 13:52:27.614: INFO: (16) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:443/proxy/: ... 
(200; 3.862716ms) Sep 3 13:52:27.619: INFO: (17) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname2/proxy/: bar (200; 4.516404ms) Sep 3 13:52:27.619: INFO: (17) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:160/proxy/: foo (200; 4.676533ms) Sep 3 13:52:27.619: INFO: (17) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname1/proxy/: foo (200; 4.690763ms) Sep 3 13:52:27.620: INFO: (17) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname2/proxy/: tls qux (200; 4.847639ms) Sep 3 13:52:27.620: INFO: (17) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname1/proxy/: tls baz (200; 4.829381ms) Sep 3 13:52:27.620: INFO: (17) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw/proxy/: test (200; 4.981685ms) Sep 3 13:52:27.620: INFO: (17) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:160/proxy/: foo (200; 4.890243ms) Sep 3 13:52:27.620: INFO: (17) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:162/proxy/: bar (200; 5.007867ms) Sep 3 13:52:27.620: INFO: (17) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:1080/proxy/: test<... (200; 4.894506ms) Sep 3 13:52:27.620: INFO: (17) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:462/proxy/: tls qux (200; 4.947473ms) Sep 3 13:52:27.620: INFO: (17) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname2/proxy/: bar (200; 4.969584ms) Sep 3 13:52:27.620: INFO: (17) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname1/proxy/: foo (200; 5.179884ms) Sep 3 13:52:27.620: INFO: (17) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:162/proxy/: bar (200; 5.167048ms) Sep 3 13:52:27.620: INFO: (17) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:443/proxy/: ... 
(200; 4.881125ms) Sep 3 13:52:27.625: INFO: (18) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:460/proxy/: tls baz (200; 4.858464ms) Sep 3 13:52:27.625: INFO: (18) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:443/proxy/: test (200; 5.007121ms) Sep 3 13:52:27.625: INFO: (18) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:162/proxy/: bar (200; 5.060732ms) Sep 3 13:52:27.625: INFO: (18) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:160/proxy/: foo (200; 5.223768ms) Sep 3 13:52:27.625: INFO: (18) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:462/proxy/: tls qux (200; 5.294172ms) Sep 3 13:52:27.625: INFO: (18) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:1080/proxy/: test<... (200; 5.277452ms) Sep 3 13:52:27.630: INFO: (19) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname1/proxy/: foo (200; 4.227513ms) Sep 3 13:52:27.630: INFO: (19) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname2/proxy/: bar (200; 4.396371ms) Sep 3 13:52:27.630: INFO: (19) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:1080/proxy/: ... (200; 4.344284ms) Sep 3 13:52:27.630: INFO: (19) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname2/proxy/: tls qux (200; 4.494302ms) Sep 3 13:52:27.630: INFO: (19) /api/v1/namespaces/proxy-6963/services/http:proxy-service-crhg4:portname2/proxy/: bar (200; 4.545318ms) Sep 3 13:52:27.630: INFO: (19) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw:1080/proxy/: test<... 
(200; 4.600345ms)
Sep 3 13:52:27.630: INFO: (19) /api/v1/namespaces/proxy-6963/services/proxy-service-crhg4:portname1/proxy/: foo (200; 4.485349ms)
Sep 3 13:52:27.630: INFO: (19) /api/v1/namespaces/proxy-6963/pods/proxy-service-crhg4-55pzw/proxy/: test (200; 4.569229ms)
Sep 3 13:52:27.630: INFO: (19) /api/v1/namespaces/proxy-6963/services/https:proxy-service-crhg4:tlsportname1/proxy/: tls baz (200; 4.555683ms)
Sep 3 13:52:27.630: INFO: (19) /api/v1/namespaces/proxy-6963/pods/http:proxy-service-crhg4-55pzw:160/proxy/: foo (200; 4.670081ms)
Sep 3 13:52:27.630: INFO: (19) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:462/proxy/: tls qux (200; 4.577283ms)
Sep 3 13:52:27.630: INFO: (19) /api/v1/namespaces/proxy-6963/pods/https:proxy-service-crhg4-55pzw:443/proxy/: >> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod busybox-a1872dcf-82e1-4e34-8c96-14536e5cb0d2 in namespace container-probe-8797
Sep 3 13:51:48.990: INFO: Started pod busybox-a1872dcf-82e1-4e34-8c96-14536e5cb0d2 in namespace container-probe-8797
STEP: checking the pod's current state and verifying that restartCount is present
Sep 3 13:51:48.992: INFO: Initial restart count of pod busybox-a1872dcf-82e1-4e34-8c96-14536e5cb0d2 is 0
Sep 3 13:52:35.122: INFO: Restart count of pod container-probe-8797/busybox-a1872dcf-82e1-4e34-8c96-14536e5cb0d2 is now 1 (46.130409809s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:52:35.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8797" for this suite.
• [SLOW TEST:48.193 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":287,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:52:35.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Sep 3 13:52:35.252: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3196 /api/v1/namespaces/watch-3196/configmaps/e2e-watch-test-watch-closed 8bf1de78-630c-48f3-b592-1a20649c723d 1049876 0 2021-09-03 13:52:35 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted]
map[] [] [] [{e2e.test Update v1 2021-09-03 13:52:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 3 13:52:35.253: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3196 /api/v1/namespaces/watch-3196/configmaps/e2e-watch-test-watch-closed 8bf1de78-630c-48f3-b592-1a20649c723d 1049877 0 2021-09-03 13:52:35 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-09-03 13:52:35 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Sep 3 13:52:35.267: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3196 /api/v1/namespaces/watch-3196/configmaps/e2e-watch-test-watch-closed 8bf1de78-630c-48f3-b592-1a20649c723d 1049878 0 2021-09-03 13:52:35 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-09-03 13:52:35 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 3 13:52:35.268: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3196 /api/v1/namespaces/watch-3196/configmaps/e2e-watch-test-watch-closed 8bf1de78-630c-48f3-b592-1a20649c723d 1049879 0 2021-09-03 13:52:35 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-09-03 13:52:35 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:52:35.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3196" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":18,"skipped":320,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:49:46.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod with failed condition
STEP: updating the pod
Sep 3 13:51:47.281: INFO: Successfully updated pod "var-expansion-3d28dbe4-99ec-4d90-a67c-d69928fe4795"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Sep 3 13:51:49.287: INFO: Deleting pod "var-expansion-3d28dbe4-99ec-4d90-a67c-d69928fe4795" in namespace "var-expansion-6722"
Sep 3 13:51:49.291: INFO: Wait up to 5m0s for pod "var-expansion-3d28dbe4-99ec-4d90-a67c-d69928fe4795" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:52:35.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6722" for this suite.
• [SLOW TEST:168.576 seconds]
[k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
S
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":-1,"completed":2,"skipped":106,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:52:32.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Sep 3 13:52:34.816: INFO: Successfully updated pod "labelsupdate07041986-b711-4dea-a31c-aee9bdaf329f"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:52:36.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6379" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":443,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:49:39.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-8805
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a new StatefulSet
Sep 3 13:49:39.488: INFO: Found 0 stateful pods, waiting for 3
Sep 3 13:49:49.492: INFO: Found 2 stateful pods, waiting for 3
Sep 3 13:49:59.492: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Sep 3 13:49:59.492: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Sep 3 13:49:59.492: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Sep 3 13:49:59.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8805 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Sep 3 13:49:59.725: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Sep 3 13:49:59.725: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Sep 3 13:49:59.725: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Sep 3 13:50:09.759: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Sep 3 13:50:19.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8805 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 3 13:50:19.987: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Sep 3 13:50:19.987: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Sep 3 13:50:19.987: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Sep 3 13:50:30.009: INFO: Waiting for StatefulSet statefulset-8805/ss2 to complete update
Sep 3 13:50:30.009: INFO: Waiting for Pod statefulset-8805/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Sep 3 13:50:30.009: INFO: Waiting for Pod statefulset-8805/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Sep 3 13:50:30.009: INFO: Waiting for Pod statefulset-8805/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Sep 3 13:50:40.016: INFO: Waiting for StatefulSet statefulset-8805/ss2 to complete update
Sep 3 13:50:40.016: INFO: Waiting for Pod statefulset-8805/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Sep 3 13:50:40.016: INFO: Waiting for Pod statefulset-8805/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Sep 3 13:50:50.016: INFO: Waiting for StatefulSet statefulset-8805/ss2 to complete update
Sep 3 13:50:50.016: INFO: Waiting for Pod statefulset-8805/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Rolling back to a previous revision
Sep 3 13:51:00.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8805 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Sep 3 13:51:00.296: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Sep 3 13:51:00.296: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Sep 3 13:51:00.296: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Sep 3 13:51:10.330: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Sep 3 13:51:20.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8805 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 3 13:51:20.626: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Sep 3 13:51:20.626: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Sep 3 13:51:20.626: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Sep 3 13:51:31.227: INFO: Waiting for StatefulSet statefulset-8805/ss2 to complete update
Sep 3 13:51:31.227: INFO: Waiting for Pod statefulset-8805/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Sep 3 13:51:31.227: INFO: Waiting for Pod statefulset-8805/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Sep 3 13:51:31.227: INFO: Waiting for Pod statefulset-8805/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Sep 3 13:51:41.233: INFO: Waiting for StatefulSet statefulset-8805/ss2 to complete update
Sep 3 13:51:41.233: INFO: Waiting for Pod statefulset-8805/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Sep 3 13:51:41.233: INFO: Waiting for Pod statefulset-8805/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Sep 3 13:51:51.235: INFO: Waiting for StatefulSet statefulset-8805/ss2 to complete update
Sep 3 13:51:51.235: INFO: Waiting for Pod statefulset-8805/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Sep 3 13:52:01.234: INFO: Deleting all statefulset in ns statefulset-8805
Sep 3 13:52:01.236: INFO: Scaling statefulset ss2 to 0
Sep 3 13:52:41.250: INFO: Waiting for statefulset status.replicas updated to 0
Sep 3 13:52:41.254: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:52:41.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8805" for this suite.
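For orientation, the rolling update and rollback traced above act on a StatefulSet shaped roughly like the manifest below. The object name, namespace, headless service name, replica count, and image tags are taken from the log; the label scheme and all remaining fields are illustrative assumptions, not the e2e framework's actual spec:

```yaml
# Sketch of the StatefulSet this test drives (labels and most fields assumed).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
  namespace: statefulset-8805
spec:
  serviceName: test            # "Creating service test in namespace statefulset-8805"
  replicas: 3
  updateStrategy:
    type: RollingUpdate        # pods are replaced in reverse ordinal order: ss2-2, ss2-1, ss2-0
  selector:
    matchLabels:
      app: ss2                 # assumed label; the log does not show the real selector
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.38-alpine  # updated to 2.4.39-alpine, then rolled back
```

Each template edit produces a new controller revision (ss2-84f9d6bf57 and ss2-65c7964b94 in the log); the "roll back" is simply another template update that restores the earlier revision's pod spec.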
• [SLOW TEST:181.830 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":5,"skipped":118,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:52:35.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should serve a basic endpoint from pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service endpoint-test2 in namespace services-6538
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6538 to expose endpoints map[]
Sep 3 13:52:35.388: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found
Sep 3 13:52:36.397: INFO: successfully validated that service endpoint-test2 in namespace services-6538 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-6538
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6538 to expose endpoints map[pod1:[80]]
Sep 3 13:52:40.416: INFO: successfully validated that service endpoint-test2 in namespace services-6538 exposes endpoints map[pod1:[80]]
STEP: Creating pod pod2 in namespace services-6538
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6538 to expose endpoints map[pod1:[80] pod2:[80]]
Sep 3 13:52:43.437: INFO: successfully validated that service endpoint-test2 in namespace services-6538 exposes endpoints map[pod1:[80] pod2:[80]]
STEP: Deleting pod pod1 in namespace services-6538
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6538 to expose endpoints map[pod2:[80]]
Sep 3 13:52:43.527: INFO: successfully validated that service endpoint-test2 in namespace services-6538 exposes endpoints map[pod2:[80]]
STEP: Deleting pod pod2 in namespace services-6538
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6538 to expose endpoints map[]
Sep 3 13:52:43.539: INFO: successfully validated that service endpoint-test2 in namespace services-6538 exposes endpoints map[]
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:52:43.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6538" for this suite.
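The endpoint bookkeeping validated above is driven purely by label selection: the Service's endpoints map gains or loses a `podname:[ports]` entry as pods matching its selector are created and deleted. A minimal sketch, with a hypothetical `name: endpoint-pod` label scheme (the log does not show the labels the e2e framework really uses):

```yaml
# Sketch only: a selector-backed Service plus one matching pod.
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
  namespace: services-6538
spec:
  selector:
    name: endpoint-pod        # assumed label carried by pod1 and pod2
  ports:
  - port: 80                  # matches the map[pod1:[80] pod2:[80]] entries above
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: services-6538
  labels:
    name: endpoint-pod        # deleting this pod removes it from the endpoints map
spec:
  containers:
  - name: server
    image: nginx              # placeholder; any server listening on port 80
    ports:
    - containerPort: 80
```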
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:8.215 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":3,"skipped":125,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:52:41.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 3 13:52:41.353: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4db5bd6d-793f-41e3-af9b-46c0878b824d" in namespace "downward-api-1929" to be "Succeeded or Failed" Sep 3 13:52:41.356: INFO: Pod "downwardapi-volume-4db5bd6d-793f-41e3-af9b-46c0878b824d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.15165ms Sep 3 13:52:43.360: INFO: Pod "downwardapi-volume-4db5bd6d-793f-41e3-af9b-46c0878b824d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006862477s Sep 3 13:52:45.363: INFO: Pod "downwardapi-volume-4db5bd6d-793f-41e3-af9b-46c0878b824d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010237801s Sep 3 13:52:47.368: INFO: Pod "downwardapi-volume-4db5bd6d-793f-41e3-af9b-46c0878b824d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014318297s STEP: Saw pod success Sep 3 13:52:47.368: INFO: Pod "downwardapi-volume-4db5bd6d-793f-41e3-af9b-46c0878b824d" satisfied condition "Succeeded or Failed" Sep 3 13:52:47.370: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod downwardapi-volume-4db5bd6d-793f-41e3-af9b-46c0878b824d container client-container: STEP: delete the pod Sep 3 13:52:47.386: INFO: Waiting for pod downwardapi-volume-4db5bd6d-793f-41e3-af9b-46c0878b824d to disappear Sep 3 13:52:47.388: INFO: Pod downwardapi-volume-4db5bd6d-793f-41e3-af9b-46c0878b824d no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:52:47.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1929" for this suite. 
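The Downward API volume test above mounts the container's own CPU limit as a file and asserts the pod exits `Succeeded`. A minimal sketch of such a pod, assuming an agnhost-style image and command (the actual e2e pod spec is not shown in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # assumed image
    command: ["cat", "/etc/podinfo/cpu_limit"]        # assumed command
    resources:
      limits:
        cpu: "1"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m          # limit is exposed in millicores (here: "1000")
```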
• [SLOW TEST:6.083 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":136,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:52:33.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-7332 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7332 STEP: creating replication controller externalsvc in namespace services-7332 I0903 13:52:33.772481 28 runners.go:190] Created replication controller with name: externalsvc, namespace: services-7332, replica count: 2 I0903 13:52:36.823112 28 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the 
NodePort service to type=ExternalName Sep 3 13:52:36.846: INFO: Creating new exec pod Sep 3 13:52:38.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7332 exec execpodwqvbl -- /bin/sh -x -c nslookup nodeport-service.services-7332.svc.cluster.local' Sep 3 13:52:39.137: INFO: stderr: "+ nslookup nodeport-service.services-7332.svc.cluster.local\n" Sep 3 13:52:39.137: INFO: stdout: "Server:\t\t10.128.0.10\nAddress:\t10.128.0.10#53\n\nnodeport-service.services-7332.svc.cluster.local\tcanonical name = externalsvc.services-7332.svc.cluster.local.\nName:\texternalsvc.services-7332.svc.cluster.local\nAddress: 10.143.54.211\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7332, will wait for the garbage collector to delete the pods Sep 3 13:52:39.196: INFO: Deleting ReplicationController externalsvc took: 5.155813ms Sep 3 13:52:39.696: INFO: Terminating ReplicationController externalsvc pods took: 500.253521ms Sep 3 13:52:53.709: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:52:53.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7332" for this suite. 
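After the type change above, `nodeport-service` resolves as a DNS CNAME to the active service, which is exactly what the `nslookup` output in the log shows (`canonical name = externalsvc.services-7332.svc.cluster.local.`). The resulting spec is roughly:

```yaml
# The NodePort service after being changed to type=ExternalName:
# kube-dns now answers queries for it with a CNAME instead of a ClusterIP.
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-7332
spec:
  type: ExternalName
  externalName: externalsvc.services-7332.svc.cluster.local
```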
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:20.012 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:51:51.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0903 13:51:52.302014 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Sep 3 13:52:54.324: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:52:54.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5170" for this suite. 
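The garbage-collector test above deletes a Deployment with `deleteOptions.PropagationPolicy: Orphan` and then verifies the owned ReplicaSet survives. The delete request body has the `metav1.DeleteOptions` shape:

```yaml
# DeleteOptions body sent with the DELETE request for the Deployment;
# "Orphan" removes owner references instead of cascading the delete,
# so the ReplicaSet (and its pods) are left behind.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan
```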
• [SLOW TEST:63.105 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":19,"skipped":426,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:52:54.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:52:56.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4853" for this suite. 
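The Docker Containers test above relies on the rule that when a container spec sets neither `command` nor `args`, the image's own ENTRYPOINT and CMD run unchanged. A minimal sketch (pod and image names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults
spec:
  containers:
  - name: c
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # assumed image
    # no command: and no args: -> the image's ENTRYPOINT/CMD are used as-is
```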
• ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":443,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:52:47.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 3 13:52:47.449: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Sep 3 13:52:51.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9000 --namespace=crd-publish-openapi-9000 create -f -' Sep 3 13:52:51.728: INFO: stderr: "" Sep 3 13:52:51.728: INFO: stdout: "e2e-test-crd-publish-openapi-9611-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Sep 3 13:52:51.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9000 --namespace=crd-publish-openapi-9000 delete e2e-test-crd-publish-openapi-9611-crds test-cr' Sep 3 13:52:51.864: INFO: stderr: "" Sep 3 13:52:51.865: INFO: stdout: "e2e-test-crd-publish-openapi-9611-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Sep 3 13:52:51.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9000 --namespace=crd-publish-openapi-9000 apply -f -' Sep 3 
13:52:52.158: INFO: stderr: "" Sep 3 13:52:52.158: INFO: stdout: "e2e-test-crd-publish-openapi-9611-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Sep 3 13:52:52.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9000 --namespace=crd-publish-openapi-9000 delete e2e-test-crd-publish-openapi-9611-crds test-cr' Sep 3 13:52:52.286: INFO: stderr: "" Sep 3 13:52:52.286: INFO: stdout: "e2e-test-crd-publish-openapi-9611-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Sep 3 13:52:52.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9000 explain e2e-test-crd-publish-openapi-9611-crds' Sep 3 13:52:52.551: INFO: stderr: "" Sep 3 13:52:52.551: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9611-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:52:56.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9000" for this suite. 
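The CRD used by the test above preserves unknown fields at the schema root, which is why `kubectl create`/`apply` accept a CR with arbitrary properties and why `kubectl explain` prints an empty DESCRIPTION. A sketch of such a CRD (group and names are illustrative, not the generated e2e ones):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com        # illustrative; the e2e test generates its own
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        # Accept any properties at the root instead of pruning them:
        x-kubernetes-preserve-unknown-fields: true
```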
• [SLOW TEST:9.296 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":7,"skipped":147,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:51:59.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-5185 Sep 3 13:52:01.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5185 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Sep 3 13:52:01.771: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Sep 3 13:52:01.771: INFO: stdout: "iptables" Sep 3 13:52:01.771: INFO: proxyMode: iptables Sep 3 13:52:01.775: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 3 13:52:01.779: 
INFO: Pod kube-proxy-mode-detector still exists Sep 3 13:52:03.780: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 3 13:52:03.783: INFO: Pod kube-proxy-mode-detector still exists Sep 3 13:52:05.780: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 3 13:52:05.783: INFO: Pod kube-proxy-mode-detector still exists Sep 3 13:52:07.780: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 3 13:52:07.783: INFO: Pod kube-proxy-mode-detector still exists Sep 3 13:52:09.780: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 3 13:52:09.784: INFO: Pod kube-proxy-mode-detector still exists Sep 3 13:52:11.780: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 3 13:52:11.783: INFO: Pod kube-proxy-mode-detector still exists Sep 3 13:52:13.780: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 3 13:52:13.783: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-5185 STEP: creating replication controller affinity-nodeport-timeout in namespace services-5185 I0903 13:52:13.798723 27 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-5185, replica count: 3 I0903 13:52:16.849365 27 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 3 13:52:16.861: INFO: Creating new exec pod Sep 3 13:52:19.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5185 exec execpod-affinity9d4m5 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Sep 3 13:52:20.145: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Sep 3 13:52:20.145: INFO: stdout: "" Sep 3 13:52:20.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=services-5185 exec execpod-affinity9d4m5 -- /bin/sh -x -c nc -zv -t -w 2 10.128.11.157 80' Sep 3 13:52:20.396: INFO: stderr: "+ nc -zv -t -w 2 10.128.11.157 80\nConnection to 10.128.11.157 80 port [tcp/http] succeeded!\n" Sep 3 13:52:20.397: INFO: stdout: "" Sep 3 13:52:20.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5185 exec execpod-affinity9d4m5 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.9 31040' Sep 3 13:52:20.645: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.9 31040\nConnection to 172.18.0.9 31040 port [tcp/31040] succeeded!\n" Sep 3 13:52:20.645: INFO: stdout: "" Sep 3 13:52:20.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5185 exec execpod-affinity9d4m5 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.10 31040' Sep 3 13:52:20.868: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.10 31040\nConnection to 172.18.0.10 31040 port [tcp/31040] succeeded!\n" Sep 3 13:52:20.868: INFO: stdout: "" Sep 3 13:52:20.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5185 exec execpod-affinity9d4m5 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.9:31040/ ; done' Sep 3 13:52:21.216: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:31040/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:31040/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:31040/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:31040/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:31040/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:31040/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:31040/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:31040/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:31040/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:31040/\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://172.18.0.9:31040/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:31040/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:31040/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:31040/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:31040/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:31040/\n" Sep 3 13:52:21.216: INFO: stdout: "\naffinity-nodeport-timeout-fcg5x\naffinity-nodeport-timeout-fcg5x\naffinity-nodeport-timeout-fcg5x\naffinity-nodeport-timeout-fcg5x\naffinity-nodeport-timeout-fcg5x\naffinity-nodeport-timeout-fcg5x\naffinity-nodeport-timeout-fcg5x\naffinity-nodeport-timeout-fcg5x\naffinity-nodeport-timeout-fcg5x\naffinity-nodeport-timeout-fcg5x\naffinity-nodeport-timeout-fcg5x\naffinity-nodeport-timeout-fcg5x\naffinity-nodeport-timeout-fcg5x\naffinity-nodeport-timeout-fcg5x\naffinity-nodeport-timeout-fcg5x\naffinity-nodeport-timeout-fcg5x" Sep 3 13:52:21.216: INFO: Received response from host: affinity-nodeport-timeout-fcg5x Sep 3 13:52:21.216: INFO: Received response from host: affinity-nodeport-timeout-fcg5x Sep 3 13:52:21.216: INFO: Received response from host: affinity-nodeport-timeout-fcg5x Sep 3 13:52:21.216: INFO: Received response from host: affinity-nodeport-timeout-fcg5x Sep 3 13:52:21.216: INFO: Received response from host: affinity-nodeport-timeout-fcg5x Sep 3 13:52:21.216: INFO: Received response from host: affinity-nodeport-timeout-fcg5x Sep 3 13:52:21.216: INFO: Received response from host: affinity-nodeport-timeout-fcg5x Sep 3 13:52:21.216: INFO: Received response from host: affinity-nodeport-timeout-fcg5x Sep 3 13:52:21.216: INFO: Received response from host: affinity-nodeport-timeout-fcg5x Sep 3 13:52:21.216: INFO: Received response from host: affinity-nodeport-timeout-fcg5x Sep 3 13:52:21.216: INFO: Received response from host: affinity-nodeport-timeout-fcg5x Sep 3 13:52:21.216: INFO: Received response from host: affinity-nodeport-timeout-fcg5x Sep 3 
13:52:21.216: INFO: Received response from host: affinity-nodeport-timeout-fcg5x Sep 3 13:52:21.216: INFO: Received response from host: affinity-nodeport-timeout-fcg5x Sep 3 13:52:21.216: INFO: Received response from host: affinity-nodeport-timeout-fcg5x Sep 3 13:52:21.216: INFO: Received response from host: affinity-nodeport-timeout-fcg5x Sep 3 13:52:21.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5185 exec execpod-affinity9d4m5 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.9:31040/' Sep 3 13:52:21.465: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.18.0.9:31040/\n" Sep 3 13:52:21.465: INFO: stdout: "affinity-nodeport-timeout-fcg5x" Sep 3 13:52:36.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5185 exec execpod-affinity9d4m5 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.9:31040/' Sep 3 13:52:36.705: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.18.0.9:31040/\n" Sep 3 13:52:36.705: INFO: stdout: "affinity-nodeport-timeout-fcg5x" Sep 3 13:52:51.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5185 exec execpod-affinity9d4m5 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.9:31040/' Sep 3 13:52:51.967: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.18.0.9:31040/\n" Sep 3 13:52:51.967: INFO: stdout: "affinity-nodeport-timeout-j7bjl" Sep 3 13:52:51.967: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-5185, will wait for the garbage collector to delete the pods Sep 3 13:52:52.035: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 5.843165ms Sep 3 13:52:52.135: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 100.275354ms [AfterEach] [sig-network] Services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:52:56.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5185" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:57.426 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":18,"skipped":361,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:52:56.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Sep 3 13:52:56.949: INFO: Waiting up to 5m0s for pod "pod-043192bf-fc72-418c-af39-a615604804d7" in namespace "emptydir-6631" to be "Succeeded or Failed" Sep 3 13:52:56.951: INFO: Pod "pod-043192bf-fc72-418c-af39-a615604804d7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.444409ms Sep 3 13:52:59.221: INFO: Pod "pod-043192bf-fc72-418c-af39-a615604804d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.272060826s STEP: Saw pod success Sep 3 13:52:59.221: INFO: Pod "pod-043192bf-fc72-418c-af39-a615604804d7" satisfied condition "Succeeded or Failed" Sep 3 13:52:59.228: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-043192bf-fc72-418c-af39-a615604804d7 container test-container: STEP: delete the pod Sep 3 13:52:59.242: INFO: Waiting for pod pod-043192bf-fc72-418c-af39-a615604804d7 to disappear Sep 3 13:52:59.245: INFO: Pod pod-043192bf-fc72-418c-af39-a615604804d7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:52:59.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6631" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":159,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:52:56.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-9c92c430-561c-4c68-a63c-afa742e3ee7b STEP: Creating a pod to test consume configMaps Sep 3 13:52:57.009: INFO: Waiting up 
to 5m0s for pod "pod-projected-configmaps-d78339ad-7b16-4ad6-ac6c-9a75f98a4f2c" in namespace "projected-7802" to be "Succeeded or Failed" Sep 3 13:52:57.012: INFO: Pod "pod-projected-configmaps-d78339ad-7b16-4ad6-ac6c-9a75f98a4f2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.620947ms Sep 3 13:52:59.227: INFO: Pod "pod-projected-configmaps-d78339ad-7b16-4ad6-ac6c-9a75f98a4f2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.217529978s STEP: Saw pod success Sep 3 13:52:59.227: INFO: Pod "pod-projected-configmaps-d78339ad-7b16-4ad6-ac6c-9a75f98a4f2c" satisfied condition "Succeeded or Failed" Sep 3 13:52:59.231: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-projected-configmaps-d78339ad-7b16-4ad6-ac6c-9a75f98a4f2c container projected-configmap-volume-test: STEP: delete the pod Sep 3 13:52:59.245: INFO: Waiting for pod pod-projected-configmaps-d78339ad-7b16-4ad6-ac6c-9a75f98a4f2c to disappear Sep 3 13:52:59.247: INFO: Pod pod-projected-configmaps-d78339ad-7b16-4ad6-ac6c-9a75f98a4f2c no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:52:59.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7802" for this suite. 
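The projected-configMap test above mounts a ConfigMap through a `projected` volume with `defaultMode` set and checks the file permissions from inside the pod. A minimal sketch, assuming a mode value and image (the log does not show the actual spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # assumed image
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    projected:
      defaultMode: 0400        # assumed mode; applied to every projected file
      sources:
      - configMap:
          name: projected-configmap-test-volume   # name pattern from the log
```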
•S ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":375,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:52:43.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 3 13:52:43.952: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 3 13:52:45.963: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273963, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273963, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273963, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273963, 
loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 3 13:52:48.978: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:52:59.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6894" for this suite. STEP: Destroying namespace "webhook-6894-markers" for this suite. 
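The admission test above registers a webhook and then verifies that non-compliant pod and configmap requests are rejected. The registration object is roughly as follows; the webhook name, path, and rules are illustrative assumptions:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-example            # illustrative name
webhooks:
- name: deny-configmap.example.com   # assumed webhook name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  clientConfig:
    service:
      namespace: webhook-6894   # namespace from the log
      name: e2e-test-webhook    # service name from the log
      path: /configmaps         # assumed path
    caBundle: "<base64 CA bundle>"   # placeholder
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["configmaps"]
```

The "namespace that bypass the webhook" step works because the e2e framework labels that namespace and scopes the webhook with a `namespaceSelector` (not shown here).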
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.766 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":4,"skipped":126,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:52:56.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-b2a3c542-4b23-4a43-a3a8-c5546618784d STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:53:00.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8291" for this suite. 
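The ConfigMap test above verifies that both `data` (UTF-8 text) and `binaryData` (base64-encoded bytes) are reflected as files in a mounted volume. A minimal sketch; keys and values are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-example   # illustrative name
data:
  text-data: "some text"           # served to the pod as a plain file
binaryData:
  binary-file: 3q2+7w==            # base64 of 0xDEADBEEF; arbitrary bytes allowed
```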
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":446,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:52:59.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-b6736368-0e4f-417d-94bd-7f9140663d7f STEP: Creating a pod to test consume secrets Sep 3 13:52:59.299: INFO: Waiting up to 5m0s for pod "pod-secrets-5cb31a44-6383-4789-9640-e49b31137118" in namespace "secrets-5917" to be "Succeeded or Failed" Sep 3 13:52:59.301: INFO: Pod "pod-secrets-5cb31a44-6383-4789-9640-e49b31137118": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110215ms Sep 3 13:53:01.422: INFO: Pod "pod-secrets-5cb31a44-6383-4789-9640-e49b31137118": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.122641344s STEP: Saw pod success Sep 3 13:53:01.422: INFO: Pod "pod-secrets-5cb31a44-6383-4789-9640-e49b31137118" satisfied condition "Succeeded or Failed" Sep 3 13:53:01.625: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-secrets-5cb31a44-6383-4789-9640-e49b31137118 container secret-volume-test: STEP: delete the pod Sep 3 13:53:01.837: INFO: Waiting for pod pod-secrets-5cb31a44-6383-4789-9640-e49b31137118 to disappear Sep 3 13:53:01.843: INFO: Pod pod-secrets-5cb31a44-6383-4789-9640-e49b31137118 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:53:01.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5917" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":168,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":11,"skipped":86,"failed":0} [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:52:53.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 3 13:52:53.756: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace 
svc-latency-1268 I0903 13:52:53.776783 28 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1268, replica count: 1 I0903 13:52:54.827412 28 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 3 13:52:54.938: INFO: Created: latency-svc-qzqbb Sep 3 13:52:54.943: INFO: Got endpoints: latency-svc-qzqbb [16.057053ms] Sep 3 13:52:54.953: INFO: Created: latency-svc-q28jx Sep 3 13:52:54.956: INFO: Got endpoints: latency-svc-q28jx [12.176122ms] Sep 3 13:52:54.959: INFO: Created: latency-svc-s9kt7 Sep 3 13:52:54.961: INFO: Got endpoints: latency-svc-s9kt7 [17.81844ms] Sep 3 13:52:54.964: INFO: Created: latency-svc-mjw7k Sep 3 13:52:54.966: INFO: Got endpoints: latency-svc-mjw7k [22.376074ms] Sep 3 13:52:54.969: INFO: Created: latency-svc-wrc8w Sep 3 13:52:54.972: INFO: Got endpoints: latency-svc-wrc8w [28.281428ms] Sep 3 13:52:54.973: INFO: Created: latency-svc-jmpgw Sep 3 13:52:54.975: INFO: Got endpoints: latency-svc-jmpgw [31.518463ms] Sep 3 13:52:54.983: INFO: Created: latency-svc-668fs Sep 3 13:52:54.985: INFO: Got endpoints: latency-svc-668fs [41.710954ms] Sep 3 13:52:54.988: INFO: Created: latency-svc-wc9fh Sep 3 13:52:54.991: INFO: Got endpoints: latency-svc-wc9fh [46.976796ms] Sep 3 13:52:54.994: INFO: Created: latency-svc-qxkjw Sep 3 13:52:54.996: INFO: Got endpoints: latency-svc-qxkjw [52.940243ms] Sep 3 13:52:54.999: INFO: Created: latency-svc-kvk7q Sep 3 13:52:55.002: INFO: Got endpoints: latency-svc-kvk7q [58.288988ms] Sep 3 13:52:55.006: INFO: Created: latency-svc-5t7p5 Sep 3 13:52:55.008: INFO: Got endpoints: latency-svc-5t7p5 [64.549962ms] Sep 3 13:52:55.012: INFO: Created: latency-svc-hlwzv Sep 3 13:52:55.014: INFO: Got endpoints: latency-svc-hlwzv [70.488467ms] Sep 3 13:52:55.034: INFO: Created: latency-svc-rcccg Sep 3 13:52:55.037: INFO: Got endpoints: latency-svc-rcccg [93.023564ms] Sep 3 13:52:55.043: INFO: 
Created: latency-svc-qz9nt Sep 3 13:52:55.046: INFO: Got endpoints: latency-svc-qz9nt [101.760251ms] Sep 3 13:52:55.051: INFO: Created: latency-svc-6qg88 Sep 3 13:52:55.054: INFO: Got endpoints: latency-svc-6qg88 [109.908918ms] Sep 3 13:52:55.059: INFO: Created: latency-svc-qg4hw Sep 3 13:52:55.062: INFO: Got endpoints: latency-svc-qg4hw [118.496235ms] Sep 3 13:52:55.065: INFO: Created: latency-svc-jknbd Sep 3 13:52:55.068: INFO: Got endpoints: latency-svc-jknbd [112.082013ms] Sep 3 13:52:55.072: INFO: Created: latency-svc-jmt9x Sep 3 13:52:55.074: INFO: Got endpoints: latency-svc-jmt9x [113.070353ms] Sep 3 13:52:55.079: INFO: Created: latency-svc-7b224 Sep 3 13:52:55.081: INFO: Got endpoints: latency-svc-7b224 [115.340427ms] Sep 3 13:52:55.090: INFO: Created: latency-svc-94zgf Sep 3 13:52:55.092: INFO: Got endpoints: latency-svc-94zgf [120.140601ms] Sep 3 13:52:55.095: INFO: Created: latency-svc-xgt98 Sep 3 13:52:55.098: INFO: Got endpoints: latency-svc-xgt98 [122.666631ms] Sep 3 13:52:55.101: INFO: Created: latency-svc-z8rfx Sep 3 13:52:55.103: INFO: Got endpoints: latency-svc-z8rfx [117.425333ms] Sep 3 13:52:55.106: INFO: Created: latency-svc-7hj9d Sep 3 13:52:55.126: INFO: Got endpoints: latency-svc-7hj9d [135.80091ms] Sep 3 13:52:55.227: INFO: Created: latency-svc-xc8j4 Sep 3 13:52:55.231: INFO: Got endpoints: latency-svc-xc8j4 [234.648784ms] Sep 3 13:52:55.234: INFO: Created: latency-svc-8cdjl Sep 3 13:52:55.236: INFO: Got endpoints: latency-svc-8cdjl [234.375551ms] Sep 3 13:52:55.240: INFO: Created: latency-svc-stkgk Sep 3 13:52:55.243: INFO: Got endpoints: latency-svc-stkgk [234.790497ms] Sep 3 13:52:55.248: INFO: Created: latency-svc-r2qq2 Sep 3 13:52:55.252: INFO: Got endpoints: latency-svc-r2qq2 [238.287276ms] Sep 3 13:52:55.254: INFO: Created: latency-svc-2rr7s Sep 3 13:52:55.257: INFO: Got endpoints: latency-svc-2rr7s [220.623986ms] Sep 3 13:52:55.263: INFO: Created: latency-svc-sknnt Sep 3 13:52:55.266: INFO: Got endpoints: latency-svc-sknnt 
[220.540716ms] Sep 3 13:52:55.269: INFO: Created: latency-svc-2blxm Sep 3 13:52:55.272: INFO: Got endpoints: latency-svc-2blxm [218.181689ms] Sep 3 13:52:55.273: INFO: Created: latency-svc-5clbs Sep 3 13:52:55.276: INFO: Got endpoints: latency-svc-5clbs [213.726415ms] Sep 3 13:52:55.279: INFO: Created: latency-svc-ljmwx Sep 3 13:52:55.281: INFO: Got endpoints: latency-svc-ljmwx [213.59484ms] Sep 3 13:52:55.287: INFO: Created: latency-svc-shgxm Sep 3 13:52:55.289: INFO: Got endpoints: latency-svc-shgxm [214.858544ms] Sep 3 13:52:55.293: INFO: Created: latency-svc-sb8rm Sep 3 13:52:55.296: INFO: Got endpoints: latency-svc-sb8rm [214.919367ms] Sep 3 13:52:55.298: INFO: Created: latency-svc-thbmt Sep 3 13:52:55.300: INFO: Got endpoints: latency-svc-thbmt [207.840618ms] Sep 3 13:52:55.302: INFO: Created: latency-svc-lb8bg Sep 3 13:52:55.304: INFO: Got endpoints: latency-svc-lb8bg [206.046943ms] Sep 3 13:52:55.307: INFO: Created: latency-svc-pdh2j Sep 3 13:52:55.310: INFO: Got endpoints: latency-svc-pdh2j [206.655151ms] Sep 3 13:52:55.312: INFO: Created: latency-svc-smq6p Sep 3 13:52:55.316: INFO: Created: latency-svc-ln8xw Sep 3 13:52:55.333: INFO: Created: latency-svc-kvzdk Sep 3 13:52:55.339: INFO: Created: latency-svc-hhxxd Sep 3 13:52:55.342: INFO: Got endpoints: latency-svc-smq6p [215.616092ms] Sep 3 13:52:55.345: INFO: Created: latency-svc-rvglt Sep 3 13:52:55.350: INFO: Created: latency-svc-2rnjd Sep 3 13:52:55.355: INFO: Created: latency-svc-wphpx Sep 3 13:52:55.362: INFO: Created: latency-svc-dbjtb Sep 3 13:52:55.369: INFO: Created: latency-svc-6hdhm Sep 3 13:52:55.376: INFO: Created: latency-svc-r8g2r Sep 3 13:52:55.383: INFO: Created: latency-svc-qvjbx Sep 3 13:52:55.389: INFO: Created: latency-svc-tsm8v Sep 3 13:52:55.394: INFO: Got endpoints: latency-svc-ln8xw [162.911233ms] Sep 3 13:52:55.399: INFO: Created: latency-svc-kssgt Sep 3 13:52:55.406: INFO: Created: latency-svc-5q57t Sep 3 13:52:55.411: INFO: Created: latency-svc-4tpwh Sep 3 13:52:55.523: INFO: 
Got endpoints: latency-svc-kvzdk [286.442367ms] Sep 3 13:52:55.523: INFO: Created: latency-svc-pk86n Sep 3 13:52:55.523: INFO: Got endpoints: latency-svc-hhxxd [280.215498ms] Sep 3 13:52:55.618: INFO: Created: latency-svc-854j4 Sep 3 13:52:55.619: INFO: Got endpoints: latency-svc-rvglt [366.075135ms] Sep 3 13:52:55.619: INFO: Got endpoints: latency-svc-2rnjd [361.158084ms] Sep 3 13:52:55.820: INFO: Got endpoints: latency-svc-wphpx [553.927825ms] Sep 3 13:52:55.821: INFO: Got endpoints: latency-svc-dbjtb [549.476278ms] Sep 3 13:52:55.821: INFO: Got endpoints: latency-svc-6hdhm [545.697993ms] Sep 3 13:52:55.822: INFO: Got endpoints: latency-svc-r8g2r [540.237634ms] Sep 3 13:52:55.823: INFO: Created: latency-svc-nnjqq Sep 3 13:52:55.829: INFO: Created: latency-svc-4nvgc Sep 3 13:52:55.835: INFO: Created: latency-svc-zp4hn Sep 3 13:52:55.843: INFO: Created: latency-svc-dfrc5 Sep 3 13:52:55.843: INFO: Got endpoints: latency-svc-qvjbx [553.40717ms] Sep 3 13:52:55.848: INFO: Created: latency-svc-sk5s7 Sep 3 13:52:55.856: INFO: Created: latency-svc-t2w6l Sep 3 13:52:55.861: INFO: Created: latency-svc-8c44z Sep 3 13:52:55.867: INFO: Created: latency-svc-kwwzp Sep 3 13:52:55.872: INFO: Created: latency-svc-h9jtv Sep 3 13:52:55.892: INFO: Got endpoints: latency-svc-tsm8v [596.09674ms] Sep 3 13:52:55.902: INFO: Created: latency-svc-zkqlz Sep 3 13:52:55.943: INFO: Got endpoints: latency-svc-5q57t [638.701676ms] Sep 3 13:52:55.960: INFO: Created: latency-svc-pml9k Sep 3 13:52:55.992: INFO: Got endpoints: latency-svc-kssgt [692.347005ms] Sep 3 13:52:56.003: INFO: Created: latency-svc-fr6q6 Sep 3 13:52:56.119: INFO: Got endpoints: latency-svc-4tpwh [808.882932ms] Sep 3 13:52:56.119: INFO: Got endpoints: latency-svc-pk86n [776.584583ms] Sep 3 13:52:56.129: INFO: Created: latency-svc-p2d8d Sep 3 13:52:56.136: INFO: Created: latency-svc-97gn6 Sep 3 13:52:56.143: INFO: Got endpoints: latency-svc-854j4 [748.540833ms] Sep 3 13:52:56.151: INFO: Created: latency-svc-gp2d6 Sep 3 
13:52:56.193: INFO: Got endpoints: latency-svc-nnjqq [669.635897ms] Sep 3 13:52:56.202: INFO: Created: latency-svc-wmk8x Sep 3 13:52:56.242: INFO: Got endpoints: latency-svc-4nvgc [718.55656ms] Sep 3 13:52:56.315: INFO: Got endpoints: latency-svc-zp4hn [696.683169ms] Sep 3 13:52:56.322: INFO: Created: latency-svc-clkbp Sep 3 13:52:56.420: INFO: Got endpoints: latency-svc-dfrc5 [801.110932ms] Sep 3 13:52:56.420: INFO: Got endpoints: latency-svc-sk5s7 [599.811801ms] Sep 3 13:52:56.428: INFO: Created: latency-svc-j4j8v Sep 3 13:52:56.434: INFO: Created: latency-svc-fcd68 Sep 3 13:52:56.439: INFO: Created: latency-svc-jdx7g Sep 3 13:52:56.441: INFO: Got endpoints: latency-svc-t2w6l [619.631352ms] Sep 3 13:52:56.518: INFO: Created: latency-svc-t5759 Sep 3 13:52:56.518: INFO: Got endpoints: latency-svc-8c44z [696.790652ms] Sep 3 13:52:56.620: INFO: Got endpoints: latency-svc-kwwzp [798.11428ms] Sep 3 13:52:56.620: INFO: Got endpoints: latency-svc-h9jtv [777.029191ms] Sep 3 13:52:56.717: INFO: Created: latency-svc-dkqrt Sep 3 13:52:56.717: INFO: Got endpoints: latency-svc-zkqlz [824.586502ms] Sep 3 13:52:56.717: INFO: Got endpoints: latency-svc-pml9k [774.625099ms] Sep 3 13:52:56.917: INFO: Got endpoints: latency-svc-fr6q6 [924.461315ms] Sep 3 13:52:56.917: INFO: Got endpoints: latency-svc-p2d8d [798.064355ms] Sep 3 13:52:56.921: INFO: Got endpoints: latency-svc-97gn6 [801.981947ms] Sep 3 13:52:56.921: INFO: Got endpoints: latency-svc-gp2d6 [778.575129ms] Sep 3 13:52:56.924: INFO: Created: latency-svc-lfb5q Sep 3 13:52:56.931: INFO: Created: latency-svc-t8hb6 Sep 3 13:52:56.942: INFO: Got endpoints: latency-svc-wmk8x [749.243893ms] Sep 3 13:52:56.945: INFO: Created: latency-svc-5qjc2 Sep 3 13:52:56.952: INFO: Created: latency-svc-hzcmc Sep 3 13:52:56.958: INFO: Created: latency-svc-pkkhw Sep 3 13:52:56.964: INFO: Created: latency-svc-frgnh Sep 3 13:52:56.971: INFO: Created: latency-svc-bsmzq Sep 3 13:52:56.977: INFO: Created: latency-svc-5v7j6 Sep 3 13:52:56.983: INFO: 
Created: latency-svc-nfc5c Sep 3 13:52:56.992: INFO: Got endpoints: latency-svc-clkbp [750.335679ms] Sep 3 13:52:57.002: INFO: Created: latency-svc-lr6m4 Sep 3 13:52:57.125: INFO: Got endpoints: latency-svc-j4j8v [809.714193ms] Sep 3 13:52:57.139: INFO: Created: latency-svc-t9ccn Sep 3 13:52:57.142: INFO: Got endpoints: latency-svc-fcd68 [722.392995ms] Sep 3 13:52:57.153: INFO: Created: latency-svc-h9n2x Sep 3 13:52:57.193: INFO: Got endpoints: latency-svc-jdx7g [772.539475ms] Sep 3 13:52:57.205: INFO: Created: latency-svc-72nzt Sep 3 13:52:57.242: INFO: Got endpoints: latency-svc-t5759 [800.730842ms] Sep 3 13:52:57.317: INFO: Got endpoints: latency-svc-dkqrt [798.604617ms] Sep 3 13:52:57.515: INFO: Created: latency-svc-9lb7b Sep 3 13:52:57.515: INFO: Got endpoints: latency-svc-lfb5q [895.213497ms] Sep 3 13:52:57.515: INFO: Got endpoints: latency-svc-t8hb6 [895.459769ms] Sep 3 13:52:57.519: INFO: Got endpoints: latency-svc-5qjc2 [801.962819ms] Sep 3 13:52:57.519: INFO: Got endpoints: latency-svc-hzcmc [801.59041ms] Sep 3 13:52:57.625: INFO: Created: latency-svc-2bqq5 Sep 3 13:52:57.625: INFO: Got endpoints: latency-svc-pkkhw [708.131449ms] Sep 3 13:52:57.625: INFO: Got endpoints: latency-svc-frgnh [708.319185ms] Sep 3 13:52:57.634: INFO: Created: latency-svc-wpbrs Sep 3 13:52:57.645: INFO: Created: latency-svc-j8nd8 Sep 3 13:52:57.645: INFO: Got endpoints: latency-svc-bsmzq [724.033036ms] Sep 3 13:52:57.651: INFO: Created: latency-svc-xjscl Sep 3 13:52:57.659: INFO: Created: latency-svc-65vvm Sep 3 13:52:57.665: INFO: Created: latency-svc-tvh4c Sep 3 13:52:57.718: INFO: Got endpoints: latency-svc-5v7j6 [796.587724ms] Sep 3 13:52:57.718: INFO: Created: latency-svc-mz4st Sep 3 13:52:57.824: INFO: Got endpoints: latency-svc-nfc5c [881.650352ms] Sep 3 13:52:57.824: INFO: Created: latency-svc-8bkb6 Sep 3 13:52:57.824: INFO: Got endpoints: latency-svc-lr6m4 [831.765789ms] Sep 3 13:52:57.833: INFO: Created: latency-svc-ms6bl Sep 3 13:52:57.844: INFO: Got endpoints: 
latency-svc-t9ccn [718.926228ms] Sep 3 13:52:57.844: INFO: Created: latency-svc-7frg7 Sep 3 13:52:57.859: INFO: Created: latency-svc-bfw7j Sep 3 13:52:57.867: INFO: Created: latency-svc-bfrpf Sep 3 13:52:57.893: INFO: Got endpoints: latency-svc-h9n2x [750.092964ms] Sep 3 13:52:57.904: INFO: Created: latency-svc-q856j Sep 3 13:52:57.943: INFO: Got endpoints: latency-svc-72nzt [750.010437ms] Sep 3 13:52:57.955: INFO: Created: latency-svc-ltcg4 Sep 3 13:52:57.993: INFO: Got endpoints: latency-svc-2bqq5 [675.297493ms] Sep 3 13:52:58.005: INFO: Created: latency-svc-6stbv Sep 3 13:52:58.122: INFO: Got endpoints: latency-svc-9lb7b [879.960595ms] Sep 3 13:52:58.122: INFO: Got endpoints: latency-svc-wpbrs [606.549933ms] Sep 3 13:52:58.217: INFO: Got endpoints: latency-svc-j8nd8 [701.599915ms] Sep 3 13:52:58.217: INFO: Got endpoints: latency-svc-xjscl [698.174069ms] Sep 3 13:52:58.317: INFO: Got endpoints: latency-svc-65vvm [798.004099ms] Sep 3 13:52:58.317: INFO: Got endpoints: latency-svc-tvh4c [691.890051ms] Sep 3 13:52:58.326: INFO: Created: latency-svc-6k9bg Sep 3 13:52:58.519: INFO: Got endpoints: latency-svc-mz4st [893.84077ms] Sep 3 13:52:58.520: INFO: Got endpoints: latency-svc-8bkb6 [874.884136ms] Sep 3 13:52:58.520: INFO: Got endpoints: latency-svc-ms6bl [801.715893ms] Sep 3 13:52:58.521: INFO: Got endpoints: latency-svc-7frg7 [697.158065ms] Sep 3 13:52:58.521: INFO: Created: latency-svc-jpm9r Sep 3 13:52:58.533: INFO: Created: latency-svc-vcxr2 Sep 3 13:52:58.543: INFO: Got endpoints: latency-svc-bfw7j [718.497305ms] Sep 3 13:52:58.546: INFO: Created: latency-svc-964ww Sep 3 13:52:58.554: INFO: Created: latency-svc-tbxbt Sep 3 13:52:58.562: INFO: Created: latency-svc-5gpgb Sep 3 13:52:58.568: INFO: Created: latency-svc-jqjpc Sep 3 13:52:58.576: INFO: Created: latency-svc-2dzvg Sep 3 13:52:58.584: INFO: Created: latency-svc-7898k Sep 3 13:52:58.591: INFO: Created: latency-svc-jmdnn Sep 3 13:52:58.592: INFO: Got endpoints: latency-svc-bfrpf [747.750216ms] Sep 3 
13:52:58.599: INFO: Created: latency-svc-xx6pb Sep 3 13:52:58.606: INFO: Created: latency-svc-sb4qz Sep 3 13:52:58.727: INFO: Got endpoints: latency-svc-q856j [833.872363ms] Sep 3 13:52:58.727: INFO: Got endpoints: latency-svc-ltcg4 [783.792221ms] Sep 3 13:52:58.739: INFO: Created: latency-svc-rx85k Sep 3 13:52:58.741: INFO: Got endpoints: latency-svc-6stbv [748.648677ms] Sep 3 13:52:58.761: INFO: Created: latency-svc-4fcjm Sep 3 13:52:58.772: INFO: Created: latency-svc-2ww9k Sep 3 13:52:58.791: INFO: Got endpoints: latency-svc-6k9bg [669.615319ms] Sep 3 13:52:58.800: INFO: Created: latency-svc-6vhfr Sep 3 13:52:58.924: INFO: Got endpoints: latency-svc-jpm9r [801.748138ms] Sep 3 13:52:58.924: INFO: Got endpoints: latency-svc-vcxr2 [706.845026ms] Sep 3 13:52:59.016: INFO: Got endpoints: latency-svc-964ww [798.513106ms] Sep 3 13:52:59.016: INFO: Got endpoints: latency-svc-tbxbt [698.687846ms] Sep 3 13:52:59.221: INFO: Got endpoints: latency-svc-5gpgb [903.636767ms] Sep 3 13:52:59.221: INFO: Got endpoints: latency-svc-jqjpc [701.634636ms] Sep 3 13:52:59.227: INFO: Got endpoints: latency-svc-2dzvg [706.836784ms] Sep 3 13:52:59.228: INFO: Got endpoints: latency-svc-7898k [707.128402ms] Sep 3 13:52:59.233: INFO: Created: latency-svc-vt7d9 Sep 3 13:52:59.238: INFO: Created: latency-svc-x67wz Sep 3 13:52:59.242: INFO: Got endpoints: latency-svc-jmdnn [722.0296ms] Sep 3 13:52:59.244: INFO: Created: latency-svc-dmtr7 Sep 3 13:52:59.250: INFO: Created: latency-svc-2hcfq Sep 3 13:52:59.255: INFO: Created: latency-svc-dwrdf Sep 3 13:52:59.260: INFO: Created: latency-svc-5hp89 Sep 3 13:52:59.266: INFO: Created: latency-svc-sjfzd Sep 3 13:52:59.271: INFO: Created: latency-svc-96shz Sep 3 13:52:59.276: INFO: Created: latency-svc-6brmd Sep 3 13:52:59.292: INFO: Got endpoints: latency-svc-xx6pb [749.856294ms] Sep 3 13:52:59.301: INFO: Created: latency-svc-mmltg Sep 3 13:52:59.419: INFO: Got endpoints: latency-svc-sb4qz [826.778793ms] Sep 3 13:52:59.419: INFO: Got endpoints: 
latency-svc-rx85k [692.310895ms] Sep 3 13:52:59.522: INFO: Got endpoints: latency-svc-2ww9k [781.144844ms] Sep 3 13:52:59.523: INFO: Got endpoints: latency-svc-4fcjm [795.872705ms] Sep 3 13:52:59.526: INFO: Created: latency-svc-s4ldj Sep 3 13:52:59.616: INFO: Got endpoints: latency-svc-6vhfr [824.5121ms] Sep 3 13:52:59.616: INFO: Got endpoints: latency-svc-vt7d9 [692.474451ms] Sep 3 13:52:59.723: INFO: Got endpoints: latency-svc-x67wz [799.191842ms] Sep 3 13:52:59.723: INFO: Got endpoints: latency-svc-dmtr7 [707.514309ms] Sep 3 13:52:59.725: INFO: Created: latency-svc-wplbm Sep 3 13:52:59.736: INFO: Created: latency-svc-m2mxq Sep 3 13:52:59.742: INFO: Got endpoints: latency-svc-2hcfq [726.421493ms] Sep 3 13:52:59.748: INFO: Created: latency-svc-2wtmh Sep 3 13:52:59.754: INFO: Created: latency-svc-zz8h6 Sep 3 13:52:59.759: INFO: Created: latency-svc-9jmrw Sep 3 13:52:59.764: INFO: Created: latency-svc-rlrhm Sep 3 13:52:59.769: INFO: Created: latency-svc-82fzv Sep 3 13:52:59.773: INFO: Created: latency-svc-mb4q9 Sep 3 13:52:59.792: INFO: Got endpoints: latency-svc-dwrdf [571.097596ms] Sep 3 13:52:59.801: INFO: Created: latency-svc-zg9kq Sep 3 13:52:59.842: INFO: Got endpoints: latency-svc-5hp89 [620.802011ms] Sep 3 13:52:59.851: INFO: Created: latency-svc-8xk9n Sep 3 13:52:59.892: INFO: Got endpoints: latency-svc-sjfzd [665.50295ms] Sep 3 13:52:59.902: INFO: Created: latency-svc-676zj Sep 3 13:52:59.942: INFO: Got endpoints: latency-svc-96shz [713.982073ms] Sep 3 13:52:59.956: INFO: Created: latency-svc-2lzzm Sep 3 13:52:59.992: INFO: Got endpoints: latency-svc-6brmd [750.163656ms] Sep 3 13:53:00.002: INFO: Created: latency-svc-ghtk4 Sep 3 13:53:00.093: INFO: Got endpoints: latency-svc-mmltg [800.2062ms] Sep 3 13:53:00.103: INFO: Created: latency-svc-7cfbs Sep 3 13:53:00.315: INFO: Got endpoints: latency-svc-s4ldj [896.322725ms] Sep 3 13:53:00.316: INFO: Got endpoints: latency-svc-wplbm [896.663167ms] Sep 3 13:53:00.317: INFO: Got endpoints: latency-svc-m2mxq 
[794.020441ms] Sep 3 13:53:00.317: INFO: Got endpoints: latency-svc-2wtmh [794.155864ms] Sep 3 13:53:00.335: INFO: Created: latency-svc-xb6gd Sep 3 13:53:00.342: INFO: Got endpoints: latency-svc-zz8h6 [725.908306ms] Sep 3 13:53:00.342: INFO: Created: latency-svc-kwpwq Sep 3 13:53:00.349: INFO: Created: latency-svc-2hp9w Sep 3 13:53:00.355: INFO: Created: latency-svc-6t8zg Sep 3 13:53:00.362: INFO: Created: latency-svc-t7d8l Sep 3 13:53:00.392: INFO: Got endpoints: latency-svc-9jmrw [776.106781ms] Sep 3 13:53:00.718: INFO: Got endpoints: latency-svc-rlrhm [994.22739ms] Sep 3 13:53:00.718: INFO: Got endpoints: latency-svc-82fzv [994.480971ms] Sep 3 13:53:00.718: INFO: Got endpoints: latency-svc-mb4q9 [975.763518ms] Sep 3 13:53:00.718: INFO: Got endpoints: latency-svc-zg9kq [926.177709ms] Sep 3 13:53:00.720: INFO: Got endpoints: latency-svc-8xk9n [878.49357ms] Sep 3 13:53:00.723: INFO: Got endpoints: latency-svc-676zj [830.951741ms] Sep 3 13:53:00.822: INFO: Created: latency-svc-b9k6h Sep 3 13:53:00.822: INFO: Got endpoints: latency-svc-2lzzm [879.630097ms] Sep 3 13:53:00.822: INFO: Got endpoints: latency-svc-ghtk4 [829.638738ms] Sep 3 13:53:00.830: INFO: Created: latency-svc-xmhcn Sep 3 13:53:00.839: INFO: Created: latency-svc-zcvrp Sep 3 13:53:00.842: INFO: Got endpoints: latency-svc-7cfbs [749.508078ms] Sep 3 13:53:00.848: INFO: Created: latency-svc-p85sc Sep 3 13:53:00.855: INFO: Created: latency-svc-569st Sep 3 13:53:00.862: INFO: Created: latency-svc-pcxkv Sep 3 13:53:00.870: INFO: Created: latency-svc-l9rql Sep 3 13:53:00.876: INFO: Created: latency-svc-dpfkk Sep 3 13:53:00.886: INFO: Created: latency-svc-s78dg Sep 3 13:53:00.892: INFO: Got endpoints: latency-svc-xb6gd [576.818227ms] Sep 3 13:53:00.893: INFO: Created: latency-svc-wmrh8 Sep 3 13:53:00.901: INFO: Created: latency-svc-vs92j Sep 3 13:53:00.943: INFO: Got endpoints: latency-svc-kwpwq [626.734928ms] Sep 3 13:53:00.952: INFO: Created: latency-svc-fnct6 Sep 3 13:53:00.992: INFO: Got endpoints: 
latency-svc-6t8zg [675.491745ms] Sep 3 13:53:01.419: INFO: Got endpoints: latency-svc-2hp9w [1.102201267s] Sep 3 13:53:01.419: INFO: Got endpoints: latency-svc-t7d8l [1.076866151s] Sep 3 13:53:01.421: INFO: Got endpoints: latency-svc-b9k6h [1.028402505s] Sep 3 13:53:01.421: INFO: Got endpoints: latency-svc-xmhcn [702.761094ms] Sep 3 13:53:01.422: INFO: Got endpoints: latency-svc-zcvrp [702.904992ms] Sep 3 13:53:01.424: INFO: Got endpoints: latency-svc-p85sc [705.59495ms] Sep 3 13:53:01.425: INFO: Got endpoints: latency-svc-569st [705.892984ms] Sep 3 13:53:01.425: INFO: Got endpoints: latency-svc-pcxkv [704.343217ms] Sep 3 13:53:01.624: INFO: Got endpoints: latency-svc-dpfkk [802.429525ms] Sep 3 13:53:01.624: INFO: Got endpoints: latency-svc-l9rql [901.069503ms] Sep 3 13:53:01.625: INFO: Got endpoints: latency-svc-s78dg [802.673251ms] Sep 3 13:53:01.625: INFO: Got endpoints: latency-svc-wmrh8 [782.666626ms] Sep 3 13:53:01.626: INFO: Created: latency-svc-g7xdl Sep 3 13:53:01.720: INFO: Created: latency-svc-grr5d Sep 3 13:53:01.721: INFO: Got endpoints: latency-svc-vs92j [828.15317ms] Sep 3 13:53:01.721: INFO: Got endpoints: latency-svc-fnct6 [778.253059ms] Sep 3 13:53:01.820: INFO: Got endpoints: latency-svc-g7xdl [827.961551ms] Sep 3 13:53:01.820: INFO: Got endpoints: latency-svc-grr5d [401.527348ms] Sep 3 13:53:01.826: INFO: Created: latency-svc-6ctrr Sep 3 13:53:01.841: INFO: Created: latency-svc-lp4v5 Sep 3 13:53:01.843: INFO: Got endpoints: latency-svc-6ctrr [423.889919ms] Sep 3 13:53:01.850: INFO: Created: latency-svc-w9gmn Sep 3 13:53:01.917: INFO: Got endpoints: latency-svc-lp4v5 [496.114757ms] Sep 3 13:53:02.120: INFO: Got endpoints: latency-svc-w9gmn [698.917407ms] Sep 3 13:53:02.219: INFO: Created: latency-svc-2fkcb Sep 3 13:53:02.232: INFO: Got endpoints: latency-svc-2fkcb [809.979964ms] Sep 3 13:53:02.324: INFO: Created: latency-svc-xxwfb Sep 3 13:53:02.327: INFO: Got endpoints: latency-svc-xxwfb [902.899181ms] Sep 3 13:53:02.335: INFO: Created: 
latency-svc-qk2p8 Sep 3 13:53:02.339: INFO: Got endpoints: latency-svc-qk2p8 [914.442901ms] Sep 3 13:53:02.342: INFO: Created: latency-svc-tm5pd Sep 3 13:53:02.345: INFO: Got endpoints: latency-svc-tm5pd [920.558162ms] Sep 3 13:53:02.349: INFO: Created: latency-svc-zj2bc Sep 3 13:53:02.351: INFO: Got endpoints: latency-svc-zj2bc [726.541399ms] Sep 3 13:53:02.355: INFO: Created: latency-svc-rh98z Sep 3 13:53:02.357: INFO: Got endpoints: latency-svc-rh98z [732.80386ms] Sep 3 13:53:02.361: INFO: Created: latency-svc-ncfsl Sep 3 13:53:02.367: INFO: Got endpoints: latency-svc-ncfsl [742.097528ms] Sep 3 13:53:02.373: INFO: Created: latency-svc-2px8p Sep 3 13:53:02.375: INFO: Got endpoints: latency-svc-2px8p [749.991011ms] Sep 3 13:53:02.378: INFO: Created: latency-svc-fqqhk Sep 3 13:53:02.383: INFO: Created: latency-svc-bqmzk Sep 3 13:53:02.388: INFO: Created: latency-svc-8xm9l Sep 3 13:53:02.393: INFO: Got endpoints: latency-svc-fqqhk [672.178526ms] Sep 3 13:53:02.394: INFO: Created: latency-svc-f4hxq Sep 3 13:53:02.399: INFO: Created: latency-svc-phsv4 Sep 3 13:53:02.405: INFO: Created: latency-svc-9tjnn Sep 3 13:53:02.410: INFO: Created: latency-svc-v9d78 Sep 3 13:53:02.530: INFO: Created: latency-svc-lpwfh Sep 3 13:53:02.530: INFO: Got endpoints: latency-svc-bqmzk [809.319479ms] Sep 3 13:53:02.530: INFO: Got endpoints: latency-svc-8xm9l [710.04431ms] Sep 3 13:53:02.547: INFO: Got endpoints: latency-svc-f4hxq [726.016576ms] Sep 3 13:53:02.558: INFO: Created: latency-svc-95nbk Sep 3 13:53:02.566: INFO: Created: latency-svc-dhxht Sep 3 13:53:02.570: INFO: Created: latency-svc-4jwjw Sep 3 13:53:02.575: INFO: Created: latency-svc-mgn8v Sep 3 13:53:02.579: INFO: Created: latency-svc-nsmcs Sep 3 13:53:02.585: INFO: Created: latency-svc-nb97l Sep 3 13:53:02.590: INFO: Created: latency-svc-76brt Sep 3 13:53:02.592: INFO: Got endpoints: latency-svc-phsv4 [748.376169ms] Sep 3 13:53:02.595: INFO: Created: latency-svc-cnzw8 Sep 3 13:53:02.604: INFO: Created: latency-svc-cwz4c Sep 
3 13:53:02.611: INFO: Created: latency-svc-qkc5d Sep 3 13:53:02.616: INFO: Created: latency-svc-m9668 Sep 3 13:53:02.622: INFO: Created: latency-svc-xq6kv Sep 3 13:53:02.642: INFO: Got endpoints: latency-svc-9tjnn [724.48751ms] Sep 3 13:53:02.656: INFO: Created: latency-svc-z86kj Sep 3 13:53:02.691: INFO: Got endpoints: latency-svc-lpwfh [459.586481ms] Sep 3 13:53:02.700: INFO: Created: latency-svc-7nl89 Sep 3 13:53:02.742: INFO: Got endpoints: latency-svc-v9d78 [621.87854ms] Sep 3 13:53:02.751: INFO: Created: latency-svc-26wp7 Sep 3 13:53:02.792: INFO: Got endpoints: latency-svc-95nbk [464.312977ms] Sep 3 13:53:02.802: INFO: Created: latency-svc-d97k4 Sep 3 13:53:02.849: INFO: Got endpoints: latency-svc-dhxht [510.166549ms] Sep 3 13:53:02.860: INFO: Created: latency-svc-99hcg Sep 3 13:53:02.893: INFO: Got endpoints: latency-svc-4jwjw [547.09762ms] Sep 3 13:53:02.942: INFO: Got endpoints: latency-svc-mgn8v [590.977822ms] Sep 3 13:53:02.992: INFO: Got endpoints: latency-svc-nsmcs [634.686683ms] Sep 3 13:53:03.042: INFO: Got endpoints: latency-svc-nb97l [675.162835ms] Sep 3 13:53:03.092: INFO: Got endpoints: latency-svc-76brt [716.581303ms] Sep 3 13:53:03.142: INFO: Got endpoints: latency-svc-cnzw8 [749.059073ms] Sep 3 13:53:03.192: INFO: Got endpoints: latency-svc-cwz4c [661.5478ms] Sep 3 13:53:03.242: INFO: Got endpoints: latency-svc-qkc5d [711.805321ms] Sep 3 13:53:03.292: INFO: Got endpoints: latency-svc-m9668 [745.531063ms] Sep 3 13:53:03.342: INFO: Got endpoints: latency-svc-xq6kv [750.139313ms] Sep 3 13:53:03.392: INFO: Got endpoints: latency-svc-z86kj [750.108618ms] Sep 3 13:53:03.443: INFO: Got endpoints: latency-svc-7nl89 [751.568326ms] Sep 3 13:53:03.493: INFO: Got endpoints: latency-svc-26wp7 [750.173216ms] Sep 3 13:53:03.542: INFO: Got endpoints: latency-svc-d97k4 [749.996475ms] Sep 3 13:53:03.617: INFO: Got endpoints: latency-svc-99hcg [767.82044ms] Sep 3 13:53:03.617: INFO: Latencies: [12.176122ms 17.81844ms 22.376074ms 28.281428ms 31.518463ms 
41.710954ms 46.976796ms 52.940243ms 58.288988ms 64.549962ms 70.488467ms 93.023564ms 101.760251ms 109.908918ms 112.082013ms 113.070353ms 115.340427ms 117.425333ms 118.496235ms 120.140601ms 122.666631ms 135.80091ms 162.911233ms 206.046943ms 206.655151ms 207.840618ms 213.59484ms 213.726415ms 214.858544ms 214.919367ms 215.616092ms 218.181689ms 220.540716ms 220.623986ms 234.375551ms 234.648784ms 234.790497ms 238.287276ms 280.215498ms 286.442367ms 361.158084ms 366.075135ms 401.527348ms 423.889919ms 459.586481ms 464.312977ms 496.114757ms 510.166549ms 540.237634ms 545.697993ms 547.09762ms 549.476278ms 553.40717ms 553.927825ms 571.097596ms 576.818227ms 590.977822ms 596.09674ms 599.811801ms 606.549933ms 619.631352ms 620.802011ms 621.87854ms 626.734928ms 634.686683ms 638.701676ms 661.5478ms 665.50295ms 669.615319ms 669.635897ms 672.178526ms 675.162835ms 675.297493ms 675.491745ms 691.890051ms 692.310895ms 692.347005ms 692.474451ms 696.683169ms 696.790652ms 697.158065ms 698.174069ms 698.687846ms 698.917407ms 701.599915ms 701.634636ms 702.761094ms 702.904992ms 704.343217ms 705.59495ms 705.892984ms 706.836784ms 706.845026ms 707.128402ms 707.514309ms 708.131449ms 708.319185ms 710.04431ms 711.805321ms 713.982073ms 716.581303ms 718.497305ms 718.55656ms 718.926228ms 722.0296ms 722.392995ms 724.033036ms 724.48751ms 725.908306ms 726.016576ms 726.421493ms 726.541399ms 732.80386ms 742.097528ms 745.531063ms 747.750216ms 748.376169ms 748.540833ms 748.648677ms 749.059073ms 749.243893ms 749.508078ms 749.856294ms 749.991011ms 749.996475ms 750.010437ms 750.092964ms 750.108618ms 750.139313ms 750.163656ms 750.173216ms 750.335679ms 751.568326ms 767.82044ms 772.539475ms 774.625099ms 776.106781ms 776.584583ms 777.029191ms 778.253059ms 778.575129ms 781.144844ms 782.666626ms 783.792221ms 794.020441ms 794.155864ms 795.872705ms 796.587724ms 798.004099ms 798.064355ms 798.11428ms 798.513106ms 798.604617ms 799.191842ms 800.2062ms 800.730842ms 801.110932ms 801.59041ms 801.715893ms 801.748138ms 801.962819ms 
801.981947ms 802.429525ms 802.673251ms 808.882932ms 809.319479ms 809.714193ms 809.979964ms 824.5121ms 824.586502ms 826.778793ms 827.961551ms 828.15317ms 829.638738ms 830.951741ms 831.765789ms 833.872363ms 874.884136ms 878.49357ms 879.630097ms 879.960595ms 881.650352ms 893.84077ms 895.213497ms 895.459769ms 896.322725ms 896.663167ms 901.069503ms 902.899181ms 903.636767ms 914.442901ms 920.558162ms 924.461315ms 926.177709ms 975.763518ms 994.22739ms 994.480971ms 1.028402505s 1.076866151s 1.102201267s] Sep 3 13:53:03.618: INFO: 50 %ile: 716.581303ms Sep 3 13:53:03.618: INFO: 90 %ile: 879.960595ms Sep 3 13:53:03.618: INFO: 99 %ile: 1.076866151s Sep 3 13:53:03.618: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:53:03.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-1268" for this suite. 
• [SLOW TEST:9.904 seconds]
[sig-network] Service endpoints latency
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":12,"skipped":86,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:53:01.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-7048710e-fab5-4352-ac3d-cb56e402d4ec
STEP: Creating a pod to test consume configMaps
Sep 3 13:53:01.845: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6a885ac7-672b-4b33-9f32-165b3f6acf54" in namespace "projected-3041" to be "Succeeded or Failed"
Sep 3 13:53:01.850: INFO: Pod "pod-projected-configmaps-6a885ac7-672b-4b33-9f32-165b3f6acf54": Phase="Pending", Reason="", readiness=false. Elapsed: 5.166984ms
Sep 3 13:53:03.854: INFO: Pod "pod-projected-configmaps-6a885ac7-672b-4b33-9f32-165b3f6acf54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009039947s
STEP: Saw pod success
Sep 3 13:53:03.854: INFO: Pod "pod-projected-configmaps-6a885ac7-672b-4b33-9f32-165b3f6acf54" satisfied condition "Succeeded or Failed"
Sep 3 13:53:03.856: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-projected-configmaps-6a885ac7-672b-4b33-9f32-165b3f6acf54 container projected-configmap-volume-test:
STEP: delete the pod
Sep 3 13:53:03.868: INFO: Waiting for pod pod-projected-configmaps-6a885ac7-672b-4b33-9f32-165b3f6acf54 to disappear
Sep 3 13:53:03.870: INFO: Pod pod-projected-configmaps-6a885ac7-672b-4b33-9f32-165b3f6acf54 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:53:03.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3041" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":483,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:53:02.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep 3 13:53:04.360: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:53:04.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8551" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":193,"failed":0}
SS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:52:59.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 3 13:52:59.748: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"414317fb-1216-40f5-a53f-dee19ad497a1", Controller:(*bool)(0xc002d26c52), BlockOwnerDeletion:(*bool)(0xc002d26c53)}}
Sep 3 13:52:59.753: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6674f859-84cb-40d3-a5fc-38fc72e0f097", Controller:(*bool)(0xc001792eea), BlockOwnerDeletion:(*bool)(0xc001792eeb)}}
Sep 3 13:52:59.757: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"894fe614-f002-4cb4-bb7a-d0b1602e8fde", Controller:(*bool)(0xc002d2713a), BlockOwnerDeletion:(*bool)(0xc002d2713b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:53:04.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9970" for this suite.
• [SLOW TEST:5.437 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":5,"skipped":129,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:53:03.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-70d0b959-3f35-48ba-95b4-2e7223780e7c
STEP: Creating a pod to test consume secrets
Sep 3 13:53:03.765: INFO: Waiting up to 5m0s for pod "pod-secrets-f59cc79d-bbc4-4a68-8f6f-79463c40af15" in namespace "secrets-5136" to be "Succeeded or Failed"
Sep 3 13:53:03.767: INFO: Pod "pod-secrets-f59cc79d-bbc4-4a68-8f6f-79463c40af15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315552ms
Sep 3 13:53:05.771: INFO: Pod "pod-secrets-f59cc79d-bbc4-4a68-8f6f-79463c40af15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005859523s
Sep 3 13:53:07.775: INFO: Pod "pod-secrets-f59cc79d-bbc4-4a68-8f6f-79463c40af15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01054458s
STEP: Saw pod success
Sep 3 13:53:07.775: INFO: Pod "pod-secrets-f59cc79d-bbc4-4a68-8f6f-79463c40af15" satisfied condition "Succeeded or Failed"
Sep 3 13:53:07.778: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-secrets-f59cc79d-bbc4-4a68-8f6f-79463c40af15 container secret-volume-test:
STEP: delete the pod
Sep 3 13:53:07.791: INFO: Waiting for pod pod-secrets-f59cc79d-bbc4-4a68-8f6f-79463c40af15 to disappear
Sep 3 13:53:07.794: INFO: Pod pod-secrets-f59cc79d-bbc4-4a68-8f6f-79463c40af15 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:53:07.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5136" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":90,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:53:04.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-bf3fc2d1-666c-417a-ab7a-9cb9d67b856f
STEP: Creating a pod to test consume secrets
Sep 3 13:53:04.828: INFO: Waiting up to 5m0s for pod "pod-secrets-38ad4175-c8e8-46c2-8283-b43fa704ef1c" in namespace "secrets-3035" to be "Succeeded or Failed"
Sep 3 13:53:04.831: INFO: Pod "pod-secrets-38ad4175-c8e8-46c2-8283-b43fa704ef1c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.01257ms
Sep 3 13:53:06.835: INFO: Pod "pod-secrets-38ad4175-c8e8-46c2-8283-b43fa704ef1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007509317s
Sep 3 13:53:08.838: INFO: Pod "pod-secrets-38ad4175-c8e8-46c2-8283-b43fa704ef1c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010557099s
Sep 3 13:53:10.843: INFO: Pod "pod-secrets-38ad4175-c8e8-46c2-8283-b43fa704ef1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015438562s
STEP: Saw pod success
Sep 3 13:53:10.843: INFO: Pod "pod-secrets-38ad4175-c8e8-46c2-8283-b43fa704ef1c" satisfied condition "Succeeded or Failed"
Sep 3 13:53:10.846: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-secrets-38ad4175-c8e8-46c2-8283-b43fa704ef1c container secret-env-test:
STEP: delete the pod
Sep 3 13:53:10.860: INFO: Waiting for pod pod-secrets-38ad4175-c8e8-46c2-8283-b43fa704ef1c to disappear
Sep 3 13:53:10.862: INFO: Pod pod-secrets-38ad4175-c8e8-46c2-8283-b43fa704ef1c no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:53:10.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3035" for this suite.
• [SLOW TEST:6.082 seconds]
[sig-api-machinery] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":137,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:53:03.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 3 13:53:04.275: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 3 13:53:06.287: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273984, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273984, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273984, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273984, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 3 13:53:08.291: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273984, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273984, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273984, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273984, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 3 13:53:11.330: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
Sep 3 13:53:12.330: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
Sep 3 13:53:13.330: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
Sep 3 13:53:14.330: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:53:14.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5564" for this suite.
STEP: Destroying namespace "webhook-5564-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:10.607 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":23,"skipped":504,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:53:10.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-cf4efc5b-0ec2-4161-870d-e4ba77481b0c
STEP: Creating a pod to test consume secrets
Sep 3 13:53:10.954: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-da4d6554-2e9d-4537-9616-66286fddcaf8" in namespace "projected-524" to be "Succeeded or Failed"
Sep 3 13:53:10.957: INFO: Pod "pod-projected-secrets-da4d6554-2e9d-4537-9616-66286fddcaf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.490234ms
Sep 3 13:53:12.960: INFO: Pod "pod-projected-secrets-da4d6554-2e9d-4537-9616-66286fddcaf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00561409s
Sep 3 13:53:14.963: INFO: Pod "pod-projected-secrets-da4d6554-2e9d-4537-9616-66286fddcaf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008800886s
STEP: Saw pod success
Sep 3 13:53:14.963: INFO: Pod "pod-projected-secrets-da4d6554-2e9d-4537-9616-66286fddcaf8" satisfied condition "Succeeded or Failed"
Sep 3 13:53:14.966: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-projected-secrets-da4d6554-2e9d-4537-9616-66286fddcaf8 container secret-volume-test:
STEP: delete the pod
Sep 3 13:53:14.980: INFO: Waiting for pod pod-projected-secrets-da4d6554-2e9d-4537-9616-66286fddcaf8 to disappear
Sep 3 13:53:14.982: INFO: Pod pod-projected-secrets-da4d6554-2e9d-4537-9616-66286fddcaf8 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:53:14.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-524" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":159,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:53:07.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Sep 3 13:53:13.893: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 3 13:53:13.896: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 3 13:53:15.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 3 13:53:15.900: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 3 13:53:17.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 3 13:53:17.900: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 3 13:53:19.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 3 13:53:19.900: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:53:19.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3327" for this suite.
• [SLOW TEST:12.087 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":99,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:53:14.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 3 13:53:15.615: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 3 13:53:17.626: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273995, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273995, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273995, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273995, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 3 13:53:19.629: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273995, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273995, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273995, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273995, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 3 13:53:21.715: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273995, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273995, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273995, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766273995, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 3 13:53:24.642: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:53:24.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6973" for this suite.
STEP: Destroying namespace "webhook-6973-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:10.291 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":24,"skipped":511,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:53:04.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-6262
STEP: creating service affinity-clusterip-transition in namespace services-6262
STEP: creating replication controller affinity-clusterip-transition in namespace services-6262
I0903 13:53:04.417129 30 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-6262, replica count: 3
I0903 13:53:07.467661 30 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0903 13:53:10.467950 30 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Sep 3 13:53:10.474: INFO: Creating new exec pod
Sep 3 13:53:13.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6262 exec execpod-affinitytg9d4 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80'
Sep 3 13:53:13.762: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n"
Sep 3 13:53:13.762: INFO: stdout: ""
Sep 3 13:53:13.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6262 exec execpod-affinitytg9d4 -- /bin/sh -x -c nc -zv -t -w 2 10.142.156.189 80'
Sep 3 13:53:14.008: INFO: stderr: "+ nc -zv -t -w 2 10.142.156.189 80\nConnection to 10.142.156.189 80 port [tcp/http] succeeded!\n"
Sep 3 13:53:14.009: INFO: stdout: ""
Sep 3 13:53:14.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6262 exec execpod-affinitytg9d4 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.142.156.189:80/ ; done'
Sep 3 13:53:14.407: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n"
Sep 3 13:53:14.407: INFO: stdout: "\naffinity-clusterip-transition-q9qf7\naffinity-clusterip-transition-xhscq\naffinity-clusterip-transition-xhscq\naffinity-clusterip-transition-q9qf7\naffinity-clusterip-transition-xhscq\naffinity-clusterip-transition-q9qf7\naffinity-clusterip-transition-rn9qk\naffinity-clusterip-transition-xhscq\naffinity-clusterip-transition-rn9qk\naffinity-clusterip-transition-xhscq\naffinity-clusterip-transition-xhscq\naffinity-clusterip-transition-q9qf7\naffinity-clusterip-transition-xhscq\naffinity-clusterip-transition-q9qf7\naffinity-clusterip-transition-rn9qk\naffinity-clusterip-transition-xhscq"
Sep 3 13:53:14.407: INFO: Received response from host: affinity-clusterip-transition-q9qf7
Sep 3 13:53:14.407: INFO: Received response from host: affinity-clusterip-transition-xhscq
Sep 3 13:53:14.407: INFO: Received response from host: affinity-clusterip-transition-xhscq
Sep 3 13:53:14.407: INFO: Received response from host: affinity-clusterip-transition-q9qf7
Sep 3 13:53:14.407: INFO: Received response from host: affinity-clusterip-transition-xhscq
Sep 3 13:53:14.407: INFO: Received response from host: affinity-clusterip-transition-q9qf7
Sep 3 13:53:14.407: INFO: Received response from host: affinity-clusterip-transition-rn9qk
Sep 3 13:53:14.407: INFO: Received response from host: affinity-clusterip-transition-xhscq
Sep 3 13:53:14.407: INFO: Received response from host: affinity-clusterip-transition-rn9qk
Sep 3 13:53:14.407: INFO: Received response from host: affinity-clusterip-transition-xhscq
Sep 3 13:53:14.407: INFO: Received response from host: affinity-clusterip-transition-xhscq
Sep 3 13:53:14.407: INFO: Received response from host: affinity-clusterip-transition-q9qf7
Sep 3 13:53:14.407: INFO: Received response from host: affinity-clusterip-transition-xhscq
Sep 3 13:53:14.408: INFO: Received response from host: affinity-clusterip-transition-q9qf7
Sep 3 13:53:14.408: INFO: Received response from host: affinity-clusterip-transition-rn9qk
Sep 3 13:53:14.408: INFO: Received response from host: affinity-clusterip-transition-xhscq
Sep 3 13:53:14.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6262 exec execpod-affinitytg9d4 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.142.156.189:80/ ; done'
Sep 3 13:53:14.762: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.156.189:80/\n"
Sep 3 13:53:14.762: INFO: stdout: "\naffinity-clusterip-transition-q9qf7\naffinity-clusterip-transition-q9qf7\naffinity-clusterip-transition-q9qf7\naffinity-clusterip-transition-q9qf7\naffinity-clusterip-transition-q9qf7\naffinity-clusterip-transition-q9qf7\naffinity-clusterip-transition-q9qf7\naffinity-clusterip-transition-q9qf7\naffinity-clusterip-transition-q9qf7\naffinity-clusterip-transition-q9qf7\naffinity-clusterip-transition-q9qf7\naffinity-clusterip-transition-q9qf7\naffinity-clusterip-transition-q9qf7\naffinity-clusterip-transition-q9qf7\naffinity-clusterip-transition-q9qf7\naffinity-clusterip-transition-q9qf7"
Sep 3 13:53:14.762: INFO: Received response from host: affinity-clusterip-transition-q9qf7
Sep 3 13:53:14.762: INFO: Received response from host: affinity-clusterip-transition-q9qf7
Sep 3 13:53:14.762: INFO: Received response from host: affinity-clusterip-transition-q9qf7
Sep 3 13:53:14.762: INFO: Received response from host: affinity-clusterip-transition-q9qf7
Sep 3 13:53:14.762: INFO: Received response from host: affinity-clusterip-transition-q9qf7
Sep 3 13:53:14.762: INFO: Received response from host: affinity-clusterip-transition-q9qf7
Sep 3 13:53:14.762: INFO: Received response from host: affinity-clusterip-transition-q9qf7
Sep 3 13:53:14.762: INFO: Received response from host: affinity-clusterip-transition-q9qf7
Sep 3 13:53:14.762: INFO: Received response from host: affinity-clusterip-transition-q9qf7
Sep 3 13:53:14.762: INFO: Received response from host: affinity-clusterip-transition-q9qf7
Sep 3 13:53:14.762: INFO: Received response from host: affinity-clusterip-transition-q9qf7
Sep 3 13:53:14.763: INFO: Received response from host: affinity-clusterip-transition-q9qf7
Sep 3 13:53:14.763: INFO: Received response from host: affinity-clusterip-transition-q9qf7
Sep 3 13:53:14.763: INFO: Received response from host: affinity-clusterip-transition-q9qf7
Sep 3 13:53:14.763: INFO: Received response from host: affinity-clusterip-transition-q9qf7
Sep 3 13:53:14.763: INFO: Received response from host: affinity-clusterip-transition-q9qf7
Sep 3 13:53:14.763: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-6262, will wait for the garbage collector to delete the pods
Sep 3 13:53:14.829: INFO: Deleting ReplicationController affinity-clusterip-transition took: 5.589971ms
Sep 3 13:53:20.129: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 5.30019284s
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:53:33.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6262" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:29.371 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":11,"skipped":195,"failed":0}
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:53:33.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It]
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-97be6a56-62f9-46c2-b012-179a01c98daa STEP: Creating a pod to test consume configMaps Sep 3 13:53:33.799: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cfd2ac7b-d10b-4b71-a5d9-f39a22d5215a" in namespace "projected-3030" to be "Succeeded or Failed" Sep 3 13:53:33.802: INFO: Pod "pod-projected-configmaps-cfd2ac7b-d10b-4b71-a5d9-f39a22d5215a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.973108ms Sep 3 13:53:35.806: INFO: Pod "pod-projected-configmaps-cfd2ac7b-d10b-4b71-a5d9-f39a22d5215a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006822413s Sep 3 13:53:37.811: INFO: Pod "pod-projected-configmaps-cfd2ac7b-d10b-4b71-a5d9-f39a22d5215a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011162693s STEP: Saw pod success Sep 3 13:53:37.811: INFO: Pod "pod-projected-configmaps-cfd2ac7b-d10b-4b71-a5d9-f39a22d5215a" satisfied condition "Succeeded or Failed" Sep 3 13:53:37.814: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-projected-configmaps-cfd2ac7b-d10b-4b71-a5d9-f39a22d5215a container projected-configmap-volume-test: STEP: delete the pod Sep 3 13:53:37.830: INFO: Waiting for pod pod-projected-configmaps-cfd2ac7b-d10b-4b71-a5d9-f39a22d5215a to disappear Sep 3 13:53:37.833: INFO: Pod pod-projected-configmaps-cfd2ac7b-d10b-4b71-a5d9-f39a22d5215a no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:53:37.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3030" for this suite. 
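The affinity-clusterip-transition test earlier in this log decides pass/fail by comparing the pod hostnames returned by 16 consecutive `curl` requests against the service's ClusterIP: with session affinity enabled, every response must come from the same backend pod, while a spread across pods shows affinity is off. A minimal sketch of that comparison (the function name is mine, not the e2e framework's):

```python
def affinity_holds(hostnames):
    """Return True if every response came from the same backend pod.

    Mirrors the check applied to the stdout of
    `for i in $(seq 0 15); do curl ... ; done` above: with session
    affinity enabled, all 16 hostnames must be identical.
    """
    return len(set(hostnames)) == 1

# First run above (affinity off): responses spread across three pods.
before = ["affinity-clusterip-transition-q9qf7",
          "affinity-clusterip-transition-xhscq",
          "affinity-clusterip-transition-rn9qk"]
# Second run (affinity on): all 16 responses from the same pod.
after = ["affinity-clusterip-transition-q9qf7"] * 16

print(affinity_holds(before))  # False: traffic was spread
print(affinity_holds(after))   # True: affinity held
```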
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":195,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:53:37.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 3 13:53:37.892: INFO: Waiting up to 5m0s for pod "downwardapi-volume-090f5cdd-9740-430a-bb56-65ffca308623" in namespace "projected-3989" to be "Succeeded or Failed" Sep 3 13:53:37.895: INFO: Pod "downwardapi-volume-090f5cdd-9740-430a-bb56-65ffca308623": Phase="Pending", Reason="", readiness=false. Elapsed: 3.172738ms Sep 3 13:53:39.899: INFO: Pod "downwardapi-volume-090f5cdd-9740-430a-bb56-65ffca308623": Phase="Running", Reason="", readiness=true. Elapsed: 2.006840414s Sep 3 13:53:41.903: INFO: Pod "downwardapi-volume-090f5cdd-9740-430a-bb56-65ffca308623": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010500066s STEP: Saw pod success Sep 3 13:53:41.903: INFO: Pod "downwardapi-volume-090f5cdd-9740-430a-bb56-65ffca308623" satisfied condition "Succeeded or Failed" Sep 3 13:53:41.906: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod downwardapi-volume-090f5cdd-9740-430a-bb56-65ffca308623 container client-container: STEP: delete the pod Sep 3 13:53:41.922: INFO: Waiting for pod downwardapi-volume-090f5cdd-9740-430a-bb56-65ffca308623 to disappear Sep 3 13:53:41.925: INFO: Pod downwardapi-volume-090f5cdd-9740-430a-bb56-65ffca308623 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:53:41.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3989" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":200,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:53:41.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:53:42.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-8875" for this suite. 
• ------------------------------ {"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":-1,"completed":14,"skipped":223,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:53:20.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-746s STEP: Creating a pod to test atomic-volume-subpath Sep 3 13:53:20.058: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-746s" in namespace "subpath-6936" to be "Succeeded or Failed" Sep 3 13:53:20.060: INFO: Pod "pod-subpath-test-configmap-746s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.516384ms Sep 3 13:53:22.064: INFO: Pod "pod-subpath-test-configmap-746s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006467636s Sep 3 13:53:24.068: INFO: Pod "pod-subpath-test-configmap-746s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010337078s Sep 3 13:53:26.072: INFO: Pod "pod-subpath-test-configmap-746s": Phase="Running", Reason="", readiness=true. Elapsed: 6.01413765s Sep 3 13:53:28.076: INFO: Pod "pod-subpath-test-configmap-746s": Phase="Running", Reason="", readiness=true. Elapsed: 8.017955403s Sep 3 13:53:30.079: INFO: Pod "pod-subpath-test-configmap-746s": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.021316085s Sep 3 13:53:32.083: INFO: Pod "pod-subpath-test-configmap-746s": Phase="Running", Reason="", readiness=true. Elapsed: 12.025128095s Sep 3 13:53:34.086: INFO: Pod "pod-subpath-test-configmap-746s": Phase="Running", Reason="", readiness=true. Elapsed: 14.028554639s Sep 3 13:53:36.090: INFO: Pod "pod-subpath-test-configmap-746s": Phase="Running", Reason="", readiness=true. Elapsed: 16.032635919s Sep 3 13:53:38.094: INFO: Pod "pod-subpath-test-configmap-746s": Phase="Running", Reason="", readiness=true. Elapsed: 18.036519296s Sep 3 13:53:40.098: INFO: Pod "pod-subpath-test-configmap-746s": Phase="Running", Reason="", readiness=true. Elapsed: 20.040551139s Sep 3 13:53:42.102: INFO: Pod "pod-subpath-test-configmap-746s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.044074192s STEP: Saw pod success Sep 3 13:53:42.102: INFO: Pod "pod-subpath-test-configmap-746s" satisfied condition "Succeeded or Failed" Sep 3 13:53:42.105: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-subpath-test-configmap-746s container test-container-subpath-configmap-746s: STEP: delete the pod Sep 3 13:53:42.119: INFO: Waiting for pod pod-subpath-test-configmap-746s to disappear Sep 3 13:53:42.122: INFO: Pod pod-subpath-test-configmap-746s no longer exists STEP: Deleting pod pod-subpath-test-configmap-746s Sep 3 13:53:42.122: INFO: Deleting pod "pod-subpath-test-configmap-746s" in namespace "subpath-6936" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:53:42.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6936" for this suite. 
• [SLOW TEST:22.118 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":15,"skipped":157,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:53:42.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-190437a7-0505-407d-bd44-905addcc2ffd STEP: Creating a pod to test consume configMaps Sep 3 13:53:42.131: INFO: Waiting up to 5m0s for pod "pod-configmaps-4301f097-1b67-4bc5-bc0a-3f505f08565c" in namespace "configmap-7881" to be "Succeeded or Failed" Sep 3 13:53:42.133: INFO: Pod "pod-configmaps-4301f097-1b67-4bc5-bc0a-3f505f08565c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.627809ms Sep 3 13:53:44.137: INFO: Pod "pod-configmaps-4301f097-1b67-4bc5-bc0a-3f505f08565c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006331862s STEP: Saw pod success Sep 3 13:53:44.137: INFO: Pod "pod-configmaps-4301f097-1b67-4bc5-bc0a-3f505f08565c" satisfied condition "Succeeded or Failed" Sep 3 13:53:44.140: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-configmaps-4301f097-1b67-4bc5-bc0a-3f505f08565c container configmap-volume-test: STEP: delete the pod Sep 3 13:53:44.155: INFO: Waiting for pod pod-configmaps-4301f097-1b67-4bc5-bc0a-3f505f08565c to disappear Sep 3 13:53:44.158: INFO: Pod pod-configmaps-4301f097-1b67-4bc5-bc0a-3f505f08565c no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:53:44.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7881" for this suite. 
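The volume tests above all follow the same wait pattern: "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'", then polling the pod phase every couple of seconds and logging the elapsed time until a terminal phase is seen. A rough sketch of that loop, with an injectable phase source and clock standing in for the API server (names are illustrative, not the framework's):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until a terminal pod phase or the timeout.

    Mirrors the log pattern above: the pod moves Pending -> Running ->
    Succeeded, with an "Elapsed" entry per poll. clock and sleep are
    injectable so the loop can be exercised without real waiting.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed > timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        sleep(interval)

# Drive it with a canned phase sequence and a fake 2s-per-poll clock.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
ticks = iter(range(0, 100, 2))  # fake seconds: 0, 2, 4, ...
phase, elapsed = wait_for_terminal_phase(lambda: next(phases),
                                         clock=lambda: next(ticks),
                                         sleep=lambda s: None)
print(phase, elapsed)  # Succeeded 8
```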
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":236,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:52:36.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0903 13:52:42.955582 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Sep 3 13:53:45.121: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:53:45.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3603" for this suite.
• [SLOW TEST:68.238 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":19,"skipped":470,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:53:45.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should delete a collection of pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create set of pods
Sep 3 13:53:45.218: INFO: created test-pod-1
Sep 3 13:53:45.222: INFO: created test-pod-2
Sep 3 13:53:45.226: INFO: created test-pod-3
STEP: waiting for all 3 pods to be located
STEP: waiting for all pods to be deleted
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:53:45.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3618" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":-1,"completed":20,"skipped":495,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:53:45.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a collection of pod templates [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create set of pod templates
Sep 3 13:53:45.314: INFO: created test-podtemplate-1
Sep 3 13:53:45.317: INFO: created test-podtemplate-2
Sep 3 13:53:45.417: INFO: created test-podtemplate-3
STEP: get a list of pod templates with a label in the current namespace
STEP: delete collection of pod templates
Sep 3 13:53:45.420: INFO: requesting DeleteCollection of pod templates
STEP: check that the list of pod templates matches the requested quantity
Sep 3 13:53:45.623: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:53:45.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-9285" for this suite.
• ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":21,"skipped":508,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:52:35.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-5007fd6e-3447-4881-9dbf-dfc2161ec922 STEP: Creating secret with name s-test-opt-upd-2fa64bf3-0fe7-4bb2-8692-b36c5574cba6 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-5007fd6e-3447-4881-9dbf-dfc2161ec922 STEP: Updating secret s-test-opt-upd-2fa64bf3-0fe7-4bb2-8692-b36c5574cba6 STEP: Creating secret with name s-test-opt-create-712cf81e-ea86-403d-bc59-7f01e01db8a2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:53:52.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2698" for this suite. 
• [SLOW TEST:76.701 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":371,"failed":0} S ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:53:45.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-5372 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-5372 I0903 13:53:46.057805 32 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5372, replica count: 2 I0903 13:53:49.108376 32 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 3 13:53:49.108: INFO: Creating new exec pod Sep 3 13:53:52.127: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=services-5372 exec execpodthwld -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Sep 3 13:53:52.396: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Sep 3 13:53:52.396: INFO: stdout: "" Sep 3 13:53:52.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5372 exec execpodthwld -- /bin/sh -x -c nc -zv -t -w 2 10.132.44.187 80' Sep 3 13:53:52.639: INFO: stderr: "+ nc -zv -t -w 2 10.132.44.187 80\nConnection to 10.132.44.187 80 port [tcp/http] succeeded!\n" Sep 3 13:53:52.639: INFO: stdout: "" Sep 3 13:53:52.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5372 exec execpodthwld -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.9 30302' Sep 3 13:53:52.880: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.9 30302\nConnection to 172.18.0.9 30302 port [tcp/30302] succeeded!\n" Sep 3 13:53:52.880: INFO: stdout: "" Sep 3 13:53:52.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5372 exec execpodthwld -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.10 30302' Sep 3 13:53:53.111: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.10 30302\nConnection to 172.18.0.10 30302 port [tcp/30302] succeeded!\n" Sep 3 13:53:53.111: INFO: stdout: "" Sep 3 13:53:53.111: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:53:53.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5372" for this suite. 
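The ExternalName-to-NodePort test above verifies each endpoint with `nc -zv -t -w 2 host port`: a bare TCP connect with a 2-second timeout and no data transfer, succeeding as soon as the handshake completes. The same connect-only probe can be sketched in Python (helper name is mine, exercised here against a throwaway local listener rather than a cluster):

```python
import socket

def tcp_reachable(host, port, timeout=2.0):
    """Connect-only TCP probe, like `nc -zv -t -w 2 host port`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Exercise against a throwaway listener on an ephemeral loopback port.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
print(tcp_reachable("127.0.0.1", port))  # True: handshake completes
srv.close()
print(tcp_reachable("127.0.0.1", port))  # False: connection refused
```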
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:7.372 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":22,"skipped":526,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:49:59.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-456469f9-ed62-405d-8335-46603cd9a2be in namespace container-probe-3933 Sep 3 13:50:01.864: INFO: Started pod test-webserver-456469f9-ed62-405d-8335-46603cd9a2be in namespace container-probe-3933 STEP: checking the pod's current state and verifying that restartCount is present Sep 3 13:50:01.866: INFO: Initial restart count of pod test-webserver-456469f9-ed62-405d-8335-46603cd9a2be is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:54:02.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3933" for this suite. • [SLOW TEST:242.409 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":85,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:53:24.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-2447 Sep 3 13:53:26.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2447 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Sep 3 13:53:27.194: INFO: stderr: "+ curl -q -s 
--connect-timeout 1 http://localhost:10249/proxyMode\n" Sep 3 13:53:27.194: INFO: stdout: "iptables" Sep 3 13:53:27.194: INFO: proxyMode: iptables Sep 3 13:53:27.200: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 3 13:53:27.204: INFO: Pod kube-proxy-mode-detector still exists Sep 3 13:53:29.204: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 3 13:53:29.208: INFO: Pod kube-proxy-mode-detector still exists Sep 3 13:53:31.204: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 3 13:53:31.208: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-2447 STEP: creating replication controller affinity-clusterip-timeout in namespace services-2447 I0903 13:53:31.226180 21 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-2447, replica count: 3 I0903 13:53:34.276718 21 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 3 13:53:34.283: INFO: Creating new exec pod Sep 3 13:53:37.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2447 exec execpod-affinitysdgml -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Sep 3 13:53:37.562: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" Sep 3 13:53:37.562: INFO: stdout: "" Sep 3 13:53:37.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2447 exec execpod-affinitysdgml -- /bin/sh -x -c nc -zv -t -w 2 10.131.167.64 80' Sep 3 13:53:37.804: INFO: stderr: "+ nc -zv -t -w 2 10.131.167.64 80\nConnection to 10.131.167.64 80 port [tcp/http] succeeded!\n" Sep 3 13:53:37.804: INFO: stdout: "" Sep 3 13:53:37.805: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=services-2447 exec execpod-affinitysdgml -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.131.167.64:80/ ; done' Sep 3 13:53:38.135: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.167.64:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.167.64:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.167.64:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.167.64:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.167.64:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.167.64:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.167.64:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.167.64:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.167.64:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.167.64:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.167.64:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.167.64:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.167.64:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.167.64:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.167.64:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.167.64:80/\n" Sep 3 13:53:38.135: INFO: stdout: "\naffinity-clusterip-timeout-6tv6b\naffinity-clusterip-timeout-6tv6b\naffinity-clusterip-timeout-6tv6b\naffinity-clusterip-timeout-6tv6b\naffinity-clusterip-timeout-6tv6b\naffinity-clusterip-timeout-6tv6b\naffinity-clusterip-timeout-6tv6b\naffinity-clusterip-timeout-6tv6b\naffinity-clusterip-timeout-6tv6b\naffinity-clusterip-timeout-6tv6b\naffinity-clusterip-timeout-6tv6b\naffinity-clusterip-timeout-6tv6b\naffinity-clusterip-timeout-6tv6b\naffinity-clusterip-timeout-6tv6b\naffinity-clusterip-timeout-6tv6b\naffinity-clusterip-timeout-6tv6b" Sep 3 13:53:38.135: INFO: Received response from host: affinity-clusterip-timeout-6tv6b Sep 3 
13:53:38.135: INFO: Received response from host: affinity-clusterip-timeout-6tv6b Sep 3 13:53:38.135: INFO: Received response from host: affinity-clusterip-timeout-6tv6b Sep 3 13:53:38.135: INFO: Received response from host: affinity-clusterip-timeout-6tv6b Sep 3 13:53:38.135: INFO: Received response from host: affinity-clusterip-timeout-6tv6b Sep 3 13:53:38.135: INFO: Received response from host: affinity-clusterip-timeout-6tv6b Sep 3 13:53:38.135: INFO: Received response from host: affinity-clusterip-timeout-6tv6b Sep 3 13:53:38.135: INFO: Received response from host: affinity-clusterip-timeout-6tv6b Sep 3 13:53:38.135: INFO: Received response from host: affinity-clusterip-timeout-6tv6b Sep 3 13:53:38.135: INFO: Received response from host: affinity-clusterip-timeout-6tv6b Sep 3 13:53:38.135: INFO: Received response from host: affinity-clusterip-timeout-6tv6b Sep 3 13:53:38.135: INFO: Received response from host: affinity-clusterip-timeout-6tv6b Sep 3 13:53:38.135: INFO: Received response from host: affinity-clusterip-timeout-6tv6b Sep 3 13:53:38.135: INFO: Received response from host: affinity-clusterip-timeout-6tv6b Sep 3 13:53:38.135: INFO: Received response from host: affinity-clusterip-timeout-6tv6b Sep 3 13:53:38.135: INFO: Received response from host: affinity-clusterip-timeout-6tv6b Sep 3 13:53:38.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2447 exec execpod-affinitysdgml -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.131.167.64:80/' Sep 3 13:53:38.381: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.131.167.64:80/\n" Sep 3 13:53:38.381: INFO: stdout: "affinity-clusterip-timeout-6tv6b" Sep 3 13:53:53.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2447 exec execpod-affinitysdgml -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.131.167.64:80/' Sep 3 13:53:53.605: INFO: stderr: "+ curl -q -s --connect-timeout 2 
http://10.131.167.64:80/\n" Sep 3 13:53:53.605: INFO: stdout: "affinity-clusterip-timeout-jhvtt" Sep 3 13:53:53.605: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-2447, will wait for the garbage collector to delete the pods Sep 3 13:53:53.671: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 5.649587ms Sep 3 13:53:53.771: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 100.128132ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:54:03.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2447" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:38.960 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":25,"skipped":516,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:54:02.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for 
a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 3 13:54:02.357: INFO: Waiting up to 5m0s for pod "downwardapi-volume-85b977c5-8ef1-42e6-8b26-033967930c47" in namespace "downward-api-9912" to be "Succeeded or Failed" Sep 3 13:54:02.360: INFO: Pod "downwardapi-volume-85b977c5-8ef1-42e6-8b26-033967930c47": Phase="Pending", Reason="", readiness=false. Elapsed: 3.001995ms Sep 3 13:54:04.364: INFO: Pod "downwardapi-volume-85b977c5-8ef1-42e6-8b26-033967930c47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00683276s STEP: Saw pod success Sep 3 13:54:04.364: INFO: Pod "downwardapi-volume-85b977c5-8ef1-42e6-8b26-033967930c47" satisfied condition "Succeeded or Failed" Sep 3 13:54:04.367: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod downwardapi-volume-85b977c5-8ef1-42e6-8b26-033967930c47 container client-container: STEP: delete the pod Sep 3 13:54:04.381: INFO: Waiting for pod downwardapi-volume-85b977c5-8ef1-42e6-8b26-033967930c47 to disappear Sep 3 13:54:04.384: INFO: Pod downwardapi-volume-85b977c5-8ef1-42e6-8b26-033967930c47 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:54:04.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9912" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":130,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:53:42.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-3888 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 3 13:53:42.236: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Sep 3 13:53:42.259: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 3 13:53:44.262: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 3 13:53:46.263: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 3 13:53:48.263: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 3 13:53:50.262: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 3 13:53:52.262: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 3 13:53:54.263: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 3 13:53:56.263: INFO: The status of Pod netserver-0 is Running (Ready = true) Sep 3 13:53:56.270: INFO: The status of Pod netserver-1 is Running (Ready = false) Sep 3 13:53:58.274: INFO: The status of Pod netserver-1 is Running (Ready = false) Sep 3 13:54:00.274: INFO: The status of 
Pod netserver-1 is Running (Ready = false) Sep 3 13:54:02.274: INFO: The status of Pod netserver-1 is Running (Ready = false) Sep 3 13:54:04.273: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Sep 3 13:54:06.302: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.94:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3888 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 3 13:54:06.302: INFO: >>> kubeConfig: /root/.kube/config Sep 3 13:54:06.424: INFO: Found all expected endpoints: [netserver-0] Sep 3 13:54:06.428: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.1.162:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3888 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 3 13:54:06.428: INFO: >>> kubeConfig: /root/.kube/config Sep 3 13:54:06.559: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:54:06.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3888" for this suite. 
• [SLOW TEST:24.363 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":194,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:54:06.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:54:06.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-8405" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":17,"skipped":205,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:53:53.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Sep 3 13:53:53.831: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 3 13:53:53.846: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 3 13:53:55.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274033, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274033, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274033, loc:(*time.Location)(0x770e980)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274033, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 3 13:53:58.873: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API Sep 3 13:54:08.892: INFO: Waiting for webhook configuration to be ready... STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:54:09.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4278" for this suite. STEP: Destroying namespace "webhook-4278-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.901 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":23,"skipped":542,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:53:52.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 3 13:53:52.828: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 3 13:53:54.838: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63766274032, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274032, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274032, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274032, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 3 13:53:57.921: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 3 13:53:57.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API Sep 3 13:54:08.531: INFO: Waiting for webhook configuration to be ready... STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:54:09.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4271" for this suite. 
STEP: Destroying namespace "webhook-4271-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.233 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":20,"skipped":372,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:54:09.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1385 STEP: creating an pod Sep 3 13:54:09.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7436 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.20 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Sep 3 13:54:09.242: INFO: stderr: "" Sep 3 13:54:09.242: INFO: stdout: "pod/logs-generator 
created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. Sep 3 13:54:09.242: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Sep 3 13:54:09.242: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7436" to be "running and ready, or succeeded" Sep 3 13:54:09.245: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.841744ms Sep 3 13:54:11.249: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.006945979s Sep 3 13:54:11.250: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Sep 3 13:54:11.250: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings Sep 3 13:54:11.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7436 logs logs-generator logs-generator' Sep 3 13:54:11.383: INFO: stderr: "" Sep 3 13:54:11.383: INFO: stdout: "I0903 13:54:10.050809 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/ndpw 265\nI0903 13:54:10.251060 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/hv7t 516\nI0903 13:54:10.451096 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/vfl9 230\nI0903 13:54:10.651090 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/47z6 463\nI0903 13:54:10.851046 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/89qr 502\nI0903 13:54:11.051066 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/8pq 589\nI0903 13:54:11.251087 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/kfgq 409\n" STEP: limiting log lines Sep 3 13:54:11.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7436 logs 
logs-generator logs-generator --tail=1' Sep 3 13:54:11.509: INFO: stderr: "" Sep 3 13:54:11.509: INFO: stdout: "I0903 13:54:11.450888 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/7zfr 279\n" Sep 3 13:54:11.509: INFO: got output "I0903 13:54:11.450888 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/7zfr 279\n" STEP: limiting log bytes Sep 3 13:54:11.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7436 logs logs-generator logs-generator --limit-bytes=1' Sep 3 13:54:11.637: INFO: stderr: "" Sep 3 13:54:11.637: INFO: stdout: "I" Sep 3 13:54:11.637: INFO: got output "I" STEP: exposing timestamps Sep 3 13:54:11.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7436 logs logs-generator logs-generator --tail=1 --timestamps' Sep 3 13:54:11.783: INFO: stderr: "" Sep 3 13:54:11.783: INFO: stdout: "2021-09-03T13:54:11.651292309Z I0903 13:54:11.651054 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/hfnr 577\n" Sep 3 13:54:11.783: INFO: got output "2021-09-03T13:54:11.651292309Z I0903 13:54:11.651054 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/hfnr 577\n" STEP: restricting to a time range Sep 3 13:54:14.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7436 logs logs-generator logs-generator --since=1s' Sep 3 13:54:14.423: INFO: stderr: "" Sep 3 13:54:14.423: INFO: stdout: "I0903 13:54:13.451050 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/jzdn 278\nI0903 13:54:13.651029 1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/l8ps 521\nI0903 13:54:13.851090 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/522g 422\nI0903 13:54:14.051001 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/rns7 492\nI0903 13:54:14.251009 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/f2d 570\n" Sep 3 13:54:14.424: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7436 logs logs-generator logs-generator --since=24h' Sep 3 13:54:14.555: INFO: stderr: "" Sep 3 13:54:14.555: INFO: stdout: "I0903 13:54:10.050809 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/ndpw 265\nI0903 13:54:10.251060 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/hv7t 516\nI0903 13:54:10.451096 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/vfl9 230\nI0903 13:54:10.651090 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/47z6 463\nI0903 13:54:10.851046 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/89qr 502\nI0903 13:54:11.051066 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/8pq 589\nI0903 13:54:11.251087 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/kfgq 409\nI0903 13:54:11.450888 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/7zfr 279\nI0903 13:54:11.651054 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/hfnr 577\nI0903 13:54:11.851005 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/x67 264\nI0903 13:54:12.051053 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/9t7 497\nI0903 13:54:12.251081 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/8gdx 256\nI0903 13:54:12.451029 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/6jj6 480\nI0903 13:54:12.651063 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/njhx 548\nI0903 13:54:12.850998 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/jx2 239\nI0903 13:54:13.051057 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/hgwx 572\nI0903 13:54:13.251098 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/rm5d 504\nI0903 13:54:13.451050 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/jzdn 278\nI0903 13:54:13.651029 1 logs_generator.go:76] 18 POST 
/api/v1/namespaces/default/pods/l8ps 521\nI0903 13:54:13.851090 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/522g 422\nI0903 13:54:14.051001 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/rns7 492\nI0903 13:54:14.251009 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/f2d 570\nI0903 13:54:14.451053 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/mstt 547\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1390 Sep 3 13:54:14.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7436 delete pod logs-generator' Sep 3 13:54:16.156: INFO: stderr: "" Sep 3 13:54:16.156: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:54:16.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7436" for this suite. 
• [SLOW TEST:7.088 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1382 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":24,"skipped":546,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:54:16.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-862/configmap-test-9e08d4d3-bc5b-46b5-8ee1-50d024aa137a STEP: Creating a pod to test consume configMaps Sep 3 13:54:16.277: INFO: Waiting up to 5m0s for pod "pod-configmaps-71f85ca3-817b-40b7-bbe6-aa347e37d85d" in namespace "configmap-862" to be "Succeeded or Failed" Sep 3 13:54:16.280: INFO: Pod "pod-configmaps-71f85ca3-817b-40b7-bbe6-aa347e37d85d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.057753ms Sep 3 13:54:18.284: INFO: Pod "pod-configmaps-71f85ca3-817b-40b7-bbe6-aa347e37d85d": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.00740869s
Sep 3 13:54:20.288: INFO: Pod "pod-configmaps-71f85ca3-817b-40b7-bbe6-aa347e37d85d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011420576s
STEP: Saw pod success
Sep 3 13:54:20.288: INFO: Pod "pod-configmaps-71f85ca3-817b-40b7-bbe6-aa347e37d85d" satisfied condition "Succeeded or Failed"
Sep 3 13:54:20.292: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-7jvhm pod pod-configmaps-71f85ca3-817b-40b7-bbe6-aa347e37d85d container env-test:
STEP: delete the pod
Sep 3 13:54:20.307: INFO: Waiting for pod pod-configmaps-71f85ca3-817b-40b7-bbe6-aa347e37d85d to disappear
Sep 3 13:54:20.310: INFO: Pod pod-configmaps-71f85ca3-817b-40b7-bbe6-aa347e37d85d no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:54:20.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-862" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":581,"failed":0}
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:54:20.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep 3 13:54:22.373: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:54:22.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7189" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":581,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:52:21.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 3 13:54:21.563: INFO: Deleting pod "var-expansion-58f18488-02f5-41a4-9a38-437a8b3f2209" in namespace "var-expansion-7237"
Sep 3 13:54:21.567: INFO: Wait up to 5m0s for pod "var-expansion-58f18488-02f5-41a4-9a38-437a8b3f2209" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:54:23.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7237" for this suite.
• [SLOW TEST:122.058 seconds]
[k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":-1,"completed":16,"skipped":287,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:54:06.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:54:23.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8108" for this suite.
• [SLOW TEST:17.080 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":18,"skipped":268,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:54:23.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-e09d81c8-faa3-4b29-8028-3d5d557b4e0b
STEP: Creating a pod to test consume secrets
Sep 3 13:54:23.894: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6891920f-a826-4c93-b50c-d82b4984385b" in namespace "projected-1051" to be "Succeeded or Failed"
Sep 3 13:54:23.897: INFO: Pod "pod-projected-secrets-6891920f-a826-4c93-b50c-d82b4984385b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.934002ms
Sep 3 13:54:25.901: INFO: Pod "pod-projected-secrets-6891920f-a826-4c93-b50c-d82b4984385b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006622566s
STEP: Saw pod success
Sep 3 13:54:25.901: INFO: Pod "pod-projected-secrets-6891920f-a826-4c93-b50c-d82b4984385b" satisfied condition "Succeeded or Failed"
Sep 3 13:54:25.904: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-7jvhm pod pod-projected-secrets-6891920f-a826-4c93-b50c-d82b4984385b container projected-secret-volume-test:
STEP: delete the pod
Sep 3 13:54:25.919: INFO: Waiting for pod pod-projected-secrets-6891920f-a826-4c93-b50c-d82b4984385b to disappear
Sep 3 13:54:25.922: INFO: Pod pod-projected-secrets-6891920f-a826-4c93-b50c-d82b4984385b no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:54:25.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1051" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":271,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:53:15.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0903 13:53:25.114594 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Sep 3 13:54:27.133: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
Sep 3 13:54:27.133: INFO: Deleting pod "simpletest-rc-to-be-deleted-67l4j" in namespace "gc-8050"
Sep 3 13:54:27.141: INFO: Deleting pod "simpletest-rc-to-be-deleted-69lwt" in namespace "gc-8050"
Sep 3 13:54:27.146: INFO: Deleting pod "simpletest-rc-to-be-deleted-7k2x4" in namespace "gc-8050"
Sep 3 13:54:27.153: INFO: Deleting pod "simpletest-rc-to-be-deleted-8wltc" in namespace "gc-8050"
Sep 3 13:54:27.158: INFO: Deleting pod "simpletest-rc-to-be-deleted-d24p8" in namespace "gc-8050"
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:54:27.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8050" for this suite.
• [SLOW TEST:72.147 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":8,"skipped":178,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:54:27.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:54:27.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9575" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":9,"skipped":215,"failed":0}
S
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:54:25.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's command
Sep 3 13:54:25.974: INFO: Waiting up to 5m0s for pod "var-expansion-6164f2a1-8a00-41f2-92d7-13ee9f7ec493" in namespace "var-expansion-7150" to be "Succeeded or Failed"
Sep 3 13:54:25.977: INFO: Pod "var-expansion-6164f2a1-8a00-41f2-92d7-13ee9f7ec493": Phase="Pending", Reason="", readiness=false. Elapsed: 2.660031ms
Sep 3 13:54:27.981: INFO: Pod "var-expansion-6164f2a1-8a00-41f2-92d7-13ee9f7ec493": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007047818s
STEP: Saw pod success
Sep 3 13:54:27.981: INFO: Pod "var-expansion-6164f2a1-8a00-41f2-92d7-13ee9f7ec493" satisfied condition "Succeeded or Failed"
Sep 3 13:54:27.984: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-7jvhm pod var-expansion-6164f2a1-8a00-41f2-92d7-13ee9f7ec493 container dapi-container:
STEP: delete the pod
Sep 3 13:54:27.997: INFO: Waiting for pod var-expansion-6164f2a1-8a00-41f2-92d7-13ee9f7ec493 to disappear
Sep 3 13:54:28.000: INFO: Pod var-expansion-6164f2a1-8a00-41f2-92d7-13ee9f7ec493 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:54:28.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7150" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":274,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:53:44.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Sep 3 13:53:44.258: INFO: PodSpec: initContainers in
spec.initContainers Sep 3 13:54:28.195: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-23b77b3e-6d44-43a4-b215-52335defc0c2", GenerateName:"", Namespace:"init-container-8976", SelfLink:"/api/v1/namespaces/init-container-8976/pods/pod-init-23b77b3e-6d44-43a4-b215-52335defc0c2", UID:"a6e14674-f217-430e-8fc9-54d24d5d8211", ResourceVersion:"1054359", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63766274024, loc:(*time.Location)(0x770e980)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"258729008"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000efdca0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000efde20)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000efdf60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000efdfe0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-9f6gb", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00369a880), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), 
DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9f6gb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9f6gb", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9f6gb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004847618), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", 
AutomountServiceAccountToken:(*bool)(nil), NodeName:"capi-kali-md-0-76b6798f7f-5n8xl", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00165fc70), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004847690)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0048476b0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0048476b8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0048476bc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0009d9e60), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274024, loc:(*time.Location)(0x770e980)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274024, loc:(*time.Location)(0x770e980)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274024, loc:(*time.Location)(0x770e980)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274024, loc:(*time.Location)(0x770e980)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.9", PodIP:"192.168.2.95", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.2.95"}}, StartTime:(*v1.Time)(0xc00194a000), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00165fe30)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00165fea0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://5f24ac4fbfd0c2c8b61b43d08366af18cfb9b837e00e2834261d87f25eb964c3", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00194a180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00194a060), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc00484773f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:54:28.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8976" for this suite.
• [SLOW TEST:43.977 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":16,"skipped":266,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:54:09.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-5537
[It] should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating statefulset ss in namespace statefulset-5537
Sep 3 13:54:09.407: INFO: Found 0 stateful pods, waiting for 1
Sep 3 13:54:19.412: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Sep 3 13:54:19.430: INFO: Deleting all statefulset in ns statefulset-5537
Sep 3 13:54:19.434: INFO: Scaling statefulset ss to 0
Sep 3 13:54:29.456: INFO: Waiting for statefulset status.replicas updated to 0
Sep 3 13:54:29.459: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:54:29.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5537" for this suite.
• [SLOW TEST:20.117 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":21,"skipped":398,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:54:22.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Sep 3 13:54:25.038: INFO: Successfully updated pod "adopt-release-ltnjr"
STEP: Checking that the Job readopts the Pod
Sep 3 13:54:25.038: INFO: Waiting up to 15m0s for pod "adopt-release-ltnjr" in namespace "job-7930" to be "adopted"
Sep 3 13:54:25.040: INFO: Pod "adopt-release-ltnjr": Phase="Running", Reason="", readiness=true. Elapsed: 2.407908ms
Sep 3 13:54:27.045: INFO: Pod "adopt-release-ltnjr": Phase="Running", Reason="", readiness=true. Elapsed: 2.006650308s
Sep 3 13:54:27.045: INFO: Pod "adopt-release-ltnjr" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Sep 3 13:54:27.555: INFO: Successfully updated pod "adopt-release-ltnjr"
STEP: Checking that the Job releases the Pod
Sep 3 13:54:27.555: INFO: Waiting up to 15m0s for pod "adopt-release-ltnjr" in namespace "job-7930" to be "released"
Sep 3 13:54:27.556: INFO: Pod "adopt-release-ltnjr": Phase="Running", Reason="", readiness=true. Elapsed: 1.566111ms
Sep 3 13:54:29.559: INFO: Pod "adopt-release-ltnjr": Phase="Running", Reason="", readiness=true. Elapsed: 2.004130403s
Sep 3 13:54:29.559: INFO: Pod "adopt-release-ltnjr" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:54:29.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7930" for this suite.
• [SLOW TEST:7.086 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":27,"skipped":627,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:52:59.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name cm-test-opt-del-90b9fad8-3036-4f95-b940-084cba7995f9
STEP: Creating configMap with name cm-test-opt-upd-f3fff721-66e9-4e8c-a7b6-f260f7a3b52a
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-90b9fad8-3036-4f95-b940-084cba7995f9
STEP: Updating configmap cm-test-opt-upd-f3fff721-66e9-4e8c-a7b6-f260f7a3b52a
STEP: Creating configMap with name cm-test-opt-create-60987650-1ec8-4f25-ae7c-a5e24eeaa461
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:54:29.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4560" for this suite.
• [SLOW TEST:90.610 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":376,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:54:29.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:54:29.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3314" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":-1,"completed":21,"skipped":403,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:54:28.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep 3 13:54:30.101: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:54:30.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9824" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":299,"failed":0} SS ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:54:29.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod Sep 3 13:54:31.539: INFO: Pod pod-hostip-ec8744aa-d4e5-4c87-91d4-0df2905a6e98 has hostIP: 172.18.0.10 [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:54:31.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6604" for this suite. 
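The host-IP spec above passes once the framework sees a log entry of the form `Pod <name> has hostIP: <ip>`. As a minimal sketch of consuming such entries from a captured log (the regex and function name are hypothetical, not part of the e2e framework):

```python
import ipaddress
import re

# Illustrative only: extract and validate the hostIP from a log entry like
# "Pod pod-hostip-... has hostIP: 172.18.0.10" seen above.
HOSTIP_RE = re.compile(r'Pod (?P<pod>\S+) has hostIP: (?P<ip>\S+)')

def parse_hostip(line: str):
    """Return (pod_name, ip) if the line reports a hostIP, else None."""
    m = HOSTIP_RE.search(line)
    if not m:
        return None
    ip = ipaddress.ip_address(m.group('ip'))  # raises ValueError if malformed
    return m.group('pod'), str(ip)

line = ('Sep 3 13:54:31.539: INFO: Pod '
        'pod-hostip-ec8744aa-d4e5-4c87-91d4-0df2905a6e98 has hostIP: 172.18.0.10')
print(parse_hostip(line))
```

The `ipaddress` round-trip doubles as validation: a malformed address raises rather than slipping through as a string.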
• ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":402,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:54:23.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 3 13:54:23.726: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Sep 3 13:54:27.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3495 --namespace=crd-publish-openapi-3495 create -f -' Sep 3 13:54:28.174: INFO: stderr: "" Sep 3 13:54:28.174: INFO: stdout: "e2e-test-crd-publish-openapi-5297-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Sep 3 13:54:28.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3495 --namespace=crd-publish-openapi-3495 delete e2e-test-crd-publish-openapi-5297-crds test-cr' Sep 3 13:54:28.332: INFO: stderr: "" Sep 3 13:54:28.332: INFO: stdout: "e2e-test-crd-publish-openapi-5297-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Sep 3 13:54:28.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3495 --namespace=crd-publish-openapi-3495 apply -f -' Sep 3 13:54:28.666: INFO: stderr: "" Sep 3 13:54:28.666: INFO: stdout: 
"e2e-test-crd-publish-openapi-5297-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Sep 3 13:54:28.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3495 --namespace=crd-publish-openapi-3495 delete e2e-test-crd-publish-openapi-5297-crds test-cr' Sep 3 13:54:28.789: INFO: stderr: "" Sep 3 13:54:28.789: INFO: stdout: "e2e-test-crd-publish-openapi-5297-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Sep 3 13:54:28.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3495 explain e2e-test-crd-publish-openapi-5297-crds' Sep 3 13:54:29.052: INFO: stderr: "" Sep 3 13:54:29.052: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5297-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:54:32.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3495" for this suite. 
• [SLOW TEST:9.301 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":17,"skipped":353,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:54:29.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-24167bfb-1e30-49f4-b00f-f8cd9b92eb41 STEP: Creating a pod to test consume secrets Sep 3 13:54:30.029: INFO: Waiting up to 5m0s for pod "pod-secrets-b7db565b-b219-49d2-b9e6-70c505b0f432" in namespace "secrets-9748" to be "Succeeded or Failed" Sep 3 13:54:30.032: INFO: Pod "pod-secrets-b7db565b-b219-49d2-b9e6-70c505b0f432": Phase="Pending", Reason="", readiness=false. Elapsed: 2.892485ms Sep 3 13:54:32.036: INFO: Pod "pod-secrets-b7db565b-b219-49d2-b9e6-70c505b0f432": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006712318s Sep 3 13:54:34.116: INFO: Pod "pod-secrets-b7db565b-b219-49d2-b9e6-70c505b0f432": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.087426095s STEP: Saw pod success Sep 3 13:54:34.117: INFO: Pod "pod-secrets-b7db565b-b219-49d2-b9e6-70c505b0f432" satisfied condition "Succeeded or Failed" Sep 3 13:54:34.120: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-7jvhm pod pod-secrets-b7db565b-b219-49d2-b9e6-70c505b0f432 container secret-volume-test: STEP: delete the pod Sep 3 13:54:34.132: INFO: Waiting for pod pod-secrets-b7db565b-b219-49d2-b9e6-70c505b0f432 to disappear Sep 3 13:54:34.134: INFO: Pod pod-secrets-b7db565b-b219-49d2-b9e6-70c505b0f432 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:54:34.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9748" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":411,"failed":0} SS ------------------------------ [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:54:27.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Sep 3 13:54:31.355: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 3 13:54:31.358: INFO: Pod pod-with-poststart-http-hook still exists Sep 3 13:54:33.358: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 3 13:54:33.361: INFO: Pod pod-with-poststart-http-hook still exists Sep 3 13:54:35.358: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 3 13:54:35.420: INFO: Pod pod-with-poststart-http-hook still exists Sep 3 13:54:37.358: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 3 13:54:37.420: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:54:37.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1946" for this suite. 
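Each completed spec in this run also emits a one-line JSON summary record (the `{"msg":"PASSED ...","total":-1,"completed":...,"skipped":...,"failed":0}` lines between the separators). A minimal sketch of tallying pass/fail counts from such records; the field names match the log, while the function itself is hypothetical:

```python
import json

def tally(records):
    """Count passing vs. failing spec summary records (one JSON object per line)."""
    passed = failed = 0
    for line in records:
        rec = json.loads(line)
        if rec["failed"] == 0 and rec["msg"].startswith("PASSED"):
            passed += 1
        else:
            failed += 1
    return {"passed": passed, "failed": failed}

# Record copied from this log.
records = [
    '{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] '
    '[Conformance]","total":-1,"completed":22,"skipped":402,"failed":0}',
]
print(tally(records))
```

Note that `completed` and `skipped` are per-worker running totals (the suite runs in parallel across 10 nodes), so they are not directly comparable between records from different workers.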
• [SLOW TEST:10.148 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":216,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:54:33.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 3 13:54:33.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9918 create -f -' Sep 3 13:54:33.448: INFO: stderr: "" Sep 3 13:54:33.448: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Sep 3 13:54:33.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9918 create -f -' Sep 3 13:54:33.711: INFO: stderr: "" Sep 3 13:54:33.711: INFO: 
stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Sep 3 13:54:34.816: INFO: Selector matched 1 pods for map[app:agnhost] Sep 3 13:54:34.816: INFO: Found 0 / 1 Sep 3 13:54:36.017: INFO: Selector matched 1 pods for map[app:agnhost] Sep 3 13:54:36.017: INFO: Found 1 / 1 Sep 3 13:54:36.017: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Sep 3 13:54:36.226: INFO: Selector matched 1 pods for map[app:agnhost] Sep 3 13:54:36.226: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Sep 3 13:54:36.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9918 describe pod agnhost-primary-wqxr2' Sep 3 13:54:36.366: INFO: stderr: "" Sep 3 13:54:36.366: INFO: stdout: "Name: agnhost-primary-wqxr2\nNamespace: kubectl-9918\nPriority: 0\nNode: capi-kali-md-0-76b6798f7f-5n8xl/172.18.0.9\nStart Time: Fri, 03 Sep 2021 13:54:33 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 192.168.2.108\nIPs:\n IP: 192.168.2.108\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://e2e7ec07e6fc160ce9a0f3782e3e067b26146044fe64143d694ef633b29dfd0c\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 03 Sep 2021 13:54:34 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-cgvzh (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-cgvzh:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-cgvzh\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n 
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-9918/agnhost-primary-wqxr2 to capi-kali-md-0-76b6798f7f-5n8xl\n Normal Pulled 2s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.20\" already present on machine\n Normal Created 2s kubelet Created container agnhost-primary\n Normal Started 2s kubelet Started container agnhost-primary\n" Sep 3 13:54:36.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9918 describe rc agnhost-primary' Sep 3 13:54:36.509: INFO: stderr: "" Sep 3 13:54:36.510: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-9918\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-primary-wqxr2\n" Sep 3 13:54:36.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9918 describe service agnhost-primary' Sep 3 13:54:36.637: INFO: stderr: "" Sep 3 13:54:36.638: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-9918\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP: 10.137.65.111\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 192.168.2.108:6379\nSession Affinity: None\nEvents: \n" Sep 3 13:54:36.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9918 describe node capi-kali-control-plane-ltrkf' Sep 3 
13:54:37.428: INFO: stderr: "" Sep 3 13:54:37.428: INFO: stdout: "Name: capi-kali-control-plane-ltrkf\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=capi-kali-control-plane-ltrkf\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: cluster.x-k8s.io/cluster-name: capi-kali\n cluster.x-k8s.io/cluster-namespace: default\n cluster.x-k8s.io/machine: capi-kali-control-plane-ltrkf\n cluster.x-k8s.io/owner-kind: KubeadmControlPlane\n cluster.x-k8s.io/owner-name: capi-kali-control-plane\n kubeadm.alpha.kubernetes.io/cri-socket: /var/run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Mon, 30 Aug 2021 14:56:33 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: capi-kali-control-plane-ltrkf\n AcquireTime: \n RenewTime: Fri, 03 Sep 2021 13:54:34 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 03 Sep 2021 13:53:54 +0000 Mon, 30 Aug 2021 14:56:33 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 03 Sep 2021 13:53:54 +0000 Mon, 30 Aug 2021 14:56:33 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 03 Sep 2021 13:53:54 +0000 Mon, 30 Aug 2021 14:56:33 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 03 Sep 2021 13:53:54 +0000 Mon, 30 Aug 2021 14:57:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.6\n Hostname: capi-kali-control-plane-ltrkf\nCapacity:\n cpu: 88\n ephemeral-storage: 459602040Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65849824Ki\n pods: 110\nAllocatable:\n cpu: 88\n ephemeral-storage: 459602040Ki\n hugepages-1Gi: 0\n 
hugepages-2Mi: 0\n memory: 65849824Ki\n pods: 110\nSystem Info:\n Machine ID: 61fc7f13ab4343dcae82c9c4cfa51265\n System UUID: 57e46b97-52ff-4f5e-8ec9-cefa8958b97b\n Boot ID: 9c7607d1-20a7-4968-bf1d-785d5386df51\n Kernel Version: 5.4.0-73-generic\n OS Image: Ubuntu 20.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.5.1\n Kubelet Version: v1.19.11\n Kube-Proxy Version: v1.19.11\nPodCIDR: 192.168.0.0/24\nPodCIDRs: 192.168.0.0/24\nProviderID: docker:////capi-kali-control-plane-ltrkf\nNon-terminated Pods: (8 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system create-loop-devs-cp657 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d22h\n kube-system etcd-capi-kali-control-plane-ltrkf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d22h\n kube-system kindnet-t8kx4 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 3d22h\n kube-system kube-apiserver-capi-kali-control-plane-ltrkf 250m (0%) 0 (0%) 0 (0%) 0 (0%) 3d22h\n kube-system kube-controller-manager-capi-kali-control-plane-ltrkf 200m (0%) 0 (0%) 0 (0%) 0 (0%) 3d22h\n kube-system kube-proxy-zfsk9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d22h\n kube-system kube-scheduler-capi-kali-control-plane-ltrkf 100m (0%) 0 (0%) 0 (0%) 0 (0%) 3d22h\n kube-system tune-sysctls-xcxpj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d22h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (0%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Warning SystemOOM 26m kubelet System OOM encountered, victim process: kindnetd, pid: 1008586\n" Sep 3 13:54:37.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9918 describe namespace kubectl-9918' Sep 3 13:54:37.733: 
INFO: stderr: "" Sep 3 13:54:37.733: INFO: stdout: "Name: kubectl-9918\nLabels: e2e-framework=kubectl\n e2e-run=31760baa-a6cc-46d3-8558-accd7217696d\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:54:37.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9918" for this suite. • [SLOW TEST:5.020 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1083 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":18,"skipped":357,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:54:30.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read 
extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 3 13:54:31.005: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Sep 3 13:54:33.015: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274071, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274071, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274071, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274071, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 3 13:54:36.420: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Sep 3 13:54:38.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=webhook-2649 attach --namespace=webhook-2649 to-be-attached-pod -i -c=container1' Sep 3 13:54:38.855: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:54:39.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2649" for this suite. STEP: Destroying namespace "webhook-2649-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.241 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":22,"skipped":301,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:54:03.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8331.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8331.svc.cluster.local;check="$$(dig +tcp +noall +answer +search 
dns-test-service.dns-8331.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8331.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8331.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8331.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8331.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8331.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8331.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8331.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8331.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 67.239.138.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.138.239.67_udp@PTR;check="$$(dig +tcp +noall +answer +search 67.239.138.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.138.239.67_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8331.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8331.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8331.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8331.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8331.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8331.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8331.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8331.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8331.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8331.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8331.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 67.239.138.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.138.239.67_udp@PTR;check="$$(dig +tcp +noall +answer +search 67.239.138.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.138.239.67_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 3 13:54:08.055: INFO: Unable to read wheezy_udp@dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:08.059: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:08.063: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:08.067: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:08.094: INFO: Unable to read jessie_udp@dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:08.098: INFO: Unable to read jessie_tcp@dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:08.101: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:08.106: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:08.129: INFO: Lookups using dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20 failed for: [wheezy_udp@dns-test-service.dns-8331.svc.cluster.local wheezy_tcp@dns-test-service.dns-8331.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local jessie_udp@dns-test-service.dns-8331.svc.cluster.local jessie_tcp@dns-test-service.dns-8331.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local]
Sep 3 13:54:13.134: INFO: Unable to read wheezy_udp@dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:13.138: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:13.141: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:13.145: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:13.175: INFO: Unable to read jessie_udp@dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:13.179: INFO: Unable to read jessie_tcp@dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:13.182: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:13.186: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:13.209: INFO: Lookups using dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20 failed for: [wheezy_udp@dns-test-service.dns-8331.svc.cluster.local wheezy_tcp@dns-test-service.dns-8331.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local jessie_udp@dns-test-service.dns-8331.svc.cluster.local jessie_tcp@dns-test-service.dns-8331.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local]
Sep 3 13:54:18.134: INFO: Unable to read wheezy_udp@dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:18.138: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:18.142: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:18.145: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:18.172: INFO: Unable to read jessie_udp@dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:18.176: INFO: Unable to read jessie_tcp@dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:18.180: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:18.183: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:18.205: INFO: Lookups using dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20 failed for: [wheezy_udp@dns-test-service.dns-8331.svc.cluster.local wheezy_tcp@dns-test-service.dns-8331.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local jessie_udp@dns-test-service.dns-8331.svc.cluster.local jessie_tcp@dns-test-service.dns-8331.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local]
Sep 3 13:54:23.134: INFO: Unable to read wheezy_udp@dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:23.138: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:23.142: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:23.146: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:23.173: INFO: Unable to read jessie_udp@dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:23.177: INFO: Unable to read jessie_tcp@dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:23.182: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:23.186: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:23.209: INFO: Lookups using dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20 failed for: [wheezy_udp@dns-test-service.dns-8331.svc.cluster.local wheezy_tcp@dns-test-service.dns-8331.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local jessie_udp@dns-test-service.dns-8331.svc.cluster.local jessie_tcp@dns-test-service.dns-8331.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local]
Sep 3 13:54:28.133: INFO: Unable to read wheezy_udp@dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:28.137: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:28.141: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:28.144: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:28.169: INFO: Unable to read jessie_udp@dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:28.173: INFO: Unable to read jessie_tcp@dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:28.176: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:28.179: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:28.200: INFO: Lookups using dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20 failed for: [wheezy_udp@dns-test-service.dns-8331.svc.cluster.local wheezy_tcp@dns-test-service.dns-8331.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local jessie_udp@dns-test-service.dns-8331.svc.cluster.local jessie_tcp@dns-test-service.dns-8331.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local]
Sep 3 13:54:33.133: INFO: Unable to read wheezy_udp@dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:33.137: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:33.140: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:33.143: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:33.164: INFO: Unable to read jessie_udp@dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:33.167: INFO: Unable to read jessie_tcp@dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:33.170: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:33.173: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local from pod dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20: the server could not find the requested resource (get pods dns-test-beaee231-2193-4706-972d-f2072c050e20)
Sep 3 13:54:33.190: INFO: Lookups using dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20 failed for: [wheezy_udp@dns-test-service.dns-8331.svc.cluster.local wheezy_tcp@dns-test-service.dns-8331.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local jessie_udp@dns-test-service.dns-8331.svc.cluster.local jessie_tcp@dns-test-service.dns-8331.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8331.svc.cluster.local]
Sep 3 13:54:39.370: INFO: DNS probes using dns-8331/dns-test-beaee231-2193-4706-972d-f2072c050e20 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:54:39.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8331" for this suite.
• [SLOW TEST:35.421 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":26,"skipped":615,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:54:29.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Sep 3 13:54:29.657: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-23 /api/v1/namespaces/watch-23/configmaps/e2e-watch-test-label-changed e245eca8-250c-4f07-b14d-edc5be982e33 1054420 0 2021-09-03 13:54:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-09-03 13:54:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 3 13:54:29.657: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-23 /api/v1/namespaces/watch-23/configmaps/e2e-watch-test-label-changed e245eca8-250c-4f07-b14d-edc5be982e33 1054421 0 2021-09-03 13:54:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-09-03 13:54:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 3 13:54:29.657: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-23 /api/v1/namespaces/watch-23/configmaps/e2e-watch-test-label-changed e245eca8-250c-4f07-b14d-edc5be982e33 1054422 0 2021-09-03 13:54:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-09-03 13:54:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Sep 3 13:54:39.682: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-23 /api/v1/namespaces/watch-23/configmaps/e2e-watch-test-label-changed e245eca8-250c-4f07-b14d-edc5be982e33 1054890 0 2021-09-03 13:54:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-09-03 13:54:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 3 13:54:39.682: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-23 /api/v1/namespaces/watch-23/configmaps/e2e-watch-test-label-changed e245eca8-250c-4f07-b14d-edc5be982e33 1054891 0 2021-09-03 13:54:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-09-03 13:54:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 3 13:54:39.682: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-23 /api/v1/namespaces/watch-23/configmaps/e2e-watch-test-label-changed e245eca8-250c-4f07-b14d-edc5be982e33 1054892 0 2021-09-03 13:54:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-09-03 13:54:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:54:39.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-23" for this suite.
• [SLOW TEST:10.077 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":28,"skipped":652,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:54:37.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Sep 3 13:54:37.738: INFO: Waiting up to 5m0s for pod "pod-92b4e1dd-42de-44b3-9ce0-af749713ca9d" in namespace "emptydir-8478" to be "Succeeded or Failed"
Sep 3 13:54:38.027: INFO: Pod "pod-92b4e1dd-42de-44b3-9ce0-af749713ca9d": Phase="Pending", Reason="", readiness=false. Elapsed: 289.475253ms
Sep 3 13:54:40.031: INFO: Pod "pod-92b4e1dd-42de-44b3-9ce0-af749713ca9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.293226534s
Sep 3 13:54:42.035: INFO: Pod "pod-92b4e1dd-42de-44b3-9ce0-af749713ca9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.297502391s
STEP: Saw pod success
Sep 3 13:54:42.036: INFO: Pod "pod-92b4e1dd-42de-44b3-9ce0-af749713ca9d" satisfied condition "Succeeded or Failed"
Sep 3 13:54:42.038: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-92b4e1dd-42de-44b3-9ce0-af749713ca9d container test-container:
STEP: delete the pod
Sep 3 13:54:42.052: INFO: Waiting for pod pod-92b4e1dd-42de-44b3-9ce0-af749713ca9d to disappear
Sep 3 13:54:42.055: INFO: Pod pod-92b4e1dd-42de-44b3-9ce0-af749713ca9d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:54:42.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8478" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":245,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:54:38.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-82d2145d-f4a6-4e20-9193-41e537ce3732
STEP: Creating a pod to test consume secrets
Sep 3 13:54:38.118: INFO: Waiting up to 5m0s for pod "pod-secrets-a09b39de-d484-4ed6-bf96-d8ee2ebb07a4" in namespace "secrets-3072" to be "Succeeded or Failed"
Sep 3 13:54:38.329: INFO: Pod "pod-secrets-a09b39de-d484-4ed6-bf96-d8ee2ebb07a4": Phase="Pending", Reason="", readiness=false. Elapsed: 210.740123ms
Sep 3 13:54:40.332: INFO: Pod "pod-secrets-a09b39de-d484-4ed6-bf96-d8ee2ebb07a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214435125s
Sep 3 13:54:42.336: INFO: Pod "pod-secrets-a09b39de-d484-4ed6-bf96-d8ee2ebb07a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.217988563s
STEP: Saw pod success
Sep 3 13:54:42.336: INFO: Pod "pod-secrets-a09b39de-d484-4ed6-bf96-d8ee2ebb07a4" satisfied condition "Succeeded or Failed"
Sep 3 13:54:42.339: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-secrets-a09b39de-d484-4ed6-bf96-d8ee2ebb07a4 container secret-volume-test:
STEP: delete the pod
Sep 3 13:54:42.354: INFO: Waiting for pod pod-secrets-a09b39de-d484-4ed6-bf96-d8ee2ebb07a4 to disappear
Sep 3 13:54:42.356: INFO: Pod pod-secrets-a09b39de-d484-4ed6-bf96-d8ee2ebb07a4 no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:54:42.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3072" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":367,"failed":0}
SSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:54:34.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Sep 3 13:54:38.627: INFO: &Pod{ObjectMeta:{send-events-07d80332-e8b4-480c-a8c9-1158b6c6b927 events-1349 /api/v1/namespaces/events-1349/pods/send-events-07d80332-e8b4-480c-a8c9-1158b6c6b927 7d2e8fcc-8281-4b64-ad0e-23e4ed5532b2 1054748 0 2021-09-03 13:54:34 +0000 UTC map[name:foo time:230814359] map[] [] [] [{e2e.test Update v1 2021-09-03 13:54:34 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-09-03 13:54:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.177\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cn9mc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cn9mc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cn9mc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-7jvhm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:192.168.1.177,StartTime:2021-09-03 13:54:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-09-03 13:54:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://5ab302ef6a36ec159e32ab672e167ca75f691f8b888d070aff54387bf495f424,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.177,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: checking for scheduler event about the pod
Sep 3 13:54:40.631: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Sep 3 13:54:42.635: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:54:42.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1349" for this suite.
• [SLOW TEST:8.580 seconds]
[k8s.io] [sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":-1,"completed":23,"skipped":413,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:54:04.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1345.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1345.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1345.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1345.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 3 13:54:06.500: INFO: DNS probes using dns-test-d7e4db48-1488-49f5-b8de-7c9e02bc8a8b succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these
commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1345.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1345.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1345.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1345.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 3 13:54:08.538: INFO: File wheezy_udp@dns-test-service-3.dns-1345.svc.cluster.local from pod dns-1345/dns-test-148de7c9-02ee-47af-ba97-d3db7e4e3b93 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 3 13:54:08.542: INFO: File jessie_udp@dns-test-service-3.dns-1345.svc.cluster.local from pod dns-1345/dns-test-148de7c9-02ee-47af-ba97-d3db7e4e3b93 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 3 13:54:08.542: INFO: Lookups using dns-1345/dns-test-148de7c9-02ee-47af-ba97-d3db7e4e3b93 failed for: [wheezy_udp@dns-test-service-3.dns-1345.svc.cluster.local jessie_udp@dns-test-service-3.dns-1345.svc.cluster.local] Sep 3 13:54:13.546: INFO: File wheezy_udp@dns-test-service-3.dns-1345.svc.cluster.local from pod dns-1345/dns-test-148de7c9-02ee-47af-ba97-d3db7e4e3b93 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 3 13:54:13.550: INFO: File jessie_udp@dns-test-service-3.dns-1345.svc.cluster.local from pod dns-1345/dns-test-148de7c9-02ee-47af-ba97-d3db7e4e3b93 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Sep 3 13:54:13.550: INFO: Lookups using dns-1345/dns-test-148de7c9-02ee-47af-ba97-d3db7e4e3b93 failed for: [wheezy_udp@dns-test-service-3.dns-1345.svc.cluster.local jessie_udp@dns-test-service-3.dns-1345.svc.cluster.local] Sep 3 13:54:18.547: INFO: File wheezy_udp@dns-test-service-3.dns-1345.svc.cluster.local from pod dns-1345/dns-test-148de7c9-02ee-47af-ba97-d3db7e4e3b93 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 3 13:54:18.551: INFO: File jessie_udp@dns-test-service-3.dns-1345.svc.cluster.local from pod dns-1345/dns-test-148de7c9-02ee-47af-ba97-d3db7e4e3b93 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 3 13:54:18.551: INFO: Lookups using dns-1345/dns-test-148de7c9-02ee-47af-ba97-d3db7e4e3b93 failed for: [wheezy_udp@dns-test-service-3.dns-1345.svc.cluster.local jessie_udp@dns-test-service-3.dns-1345.svc.cluster.local] Sep 3 13:54:23.547: INFO: File wheezy_udp@dns-test-service-3.dns-1345.svc.cluster.local from pod dns-1345/dns-test-148de7c9-02ee-47af-ba97-d3db7e4e3b93 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 3 13:54:23.551: INFO: File jessie_udp@dns-test-service-3.dns-1345.svc.cluster.local from pod dns-1345/dns-test-148de7c9-02ee-47af-ba97-d3db7e4e3b93 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 3 13:54:23.551: INFO: Lookups using dns-1345/dns-test-148de7c9-02ee-47af-ba97-d3db7e4e3b93 failed for: [wheezy_udp@dns-test-service-3.dns-1345.svc.cluster.local jessie_udp@dns-test-service-3.dns-1345.svc.cluster.local] Sep 3 13:54:28.554: INFO: File wheezy_udp@dns-test-service-3.dns-1345.svc.cluster.local from pod dns-1345/dns-test-148de7c9-02ee-47af-ba97-d3db7e4e3b93 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 3 13:54:28.558: INFO: File jessie_udp@dns-test-service-3.dns-1345.svc.cluster.local from pod dns-1345/dns-test-148de7c9-02ee-47af-ba97-d3db7e4e3b93 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Sep 3 13:54:28.558: INFO: Lookups using dns-1345/dns-test-148de7c9-02ee-47af-ba97-d3db7e4e3b93 failed for: [wheezy_udp@dns-test-service-3.dns-1345.svc.cluster.local jessie_udp@dns-test-service-3.dns-1345.svc.cluster.local] Sep 3 13:54:33.547: INFO: File wheezy_udp@dns-test-service-3.dns-1345.svc.cluster.local from pod dns-1345/dns-test-148de7c9-02ee-47af-ba97-d3db7e4e3b93 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 3 13:54:33.550: INFO: File jessie_udp@dns-test-service-3.dns-1345.svc.cluster.local from pod dns-1345/dns-test-148de7c9-02ee-47af-ba97-d3db7e4e3b93 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 3 13:54:33.550: INFO: Lookups using dns-1345/dns-test-148de7c9-02ee-47af-ba97-d3db7e4e3b93 failed for: [wheezy_udp@dns-test-service-3.dns-1345.svc.cluster.local jessie_udp@dns-test-service-3.dns-1345.svc.cluster.local] Sep 3 13:54:38.729: INFO: DNS probes using dns-test-148de7c9-02ee-47af-ba97-d3db7e4e3b93 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1345.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1345.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1345.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1345.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 3 13:54:43.363: INFO: DNS probes using dns-test-8d554de7-be4c-40c7-b361-d2087dce7913 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:54:43.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "dns-1345" for this suite. • [SLOW TEST:38.944 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":7,"skipped":156,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:54:42.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 3 13:54:42.408: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2dc59445-a81c-4407-a37f-f83c1d04ea75" in namespace "projected-2061" to be "Succeeded or Failed" Sep 3 13:54:42.410: INFO: Pod "downwardapi-volume-2dc59445-a81c-4407-a37f-f83c1d04ea75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.601714ms Sep 3 13:54:44.413: INFO: Pod "downwardapi-volume-2dc59445-a81c-4407-a37f-f83c1d04ea75": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005202694s Sep 3 13:54:46.417: INFO: Pod "downwardapi-volume-2dc59445-a81c-4407-a37f-f83c1d04ea75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00944753s Sep 3 13:54:48.420: INFO: Pod "downwardapi-volume-2dc59445-a81c-4407-a37f-f83c1d04ea75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012124587s STEP: Saw pod success Sep 3 13:54:48.420: INFO: Pod "downwardapi-volume-2dc59445-a81c-4407-a37f-f83c1d04ea75" satisfied condition "Succeeded or Failed" Sep 3 13:54:48.422: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod downwardapi-volume-2dc59445-a81c-4407-a37f-f83c1d04ea75 container client-container: STEP: delete the pod Sep 3 13:54:48.433: INFO: Waiting for pod downwardapi-volume-2dc59445-a81c-4407-a37f-f83c1d04ea75 to disappear Sep 3 13:54:48.435: INFO: Pod downwardapi-volume-2dc59445-a81c-4407-a37f-f83c1d04ea75 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:54:48.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2061" for this suite. 
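The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` / `Phase="Pending" ... Elapsed:` lines above come from a loop that re-reads the pod phase every couple of seconds and logs the elapsed time until a terminal phase is reached. A self-contained sketch of that loop, assuming `get_phase` stands in for a real API read of `pod.status.phase` (the replayed phase sequence below just mirrors the log):

```python
import time

TERMINAL = {"Succeeded", "Failed"}

def wait_for_pod_terminal(get_phase, timeout=300.0, interval=2.0):
    """Poll a pod's phase until it reaches a terminal state and return
    that phase; raise TimeoutError after `timeout` seconds."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        print(f"Pod phase={phase!r}, elapsed={time.monotonic() - start:.3f}s")
        if phase in TERMINAL:
            return phase
        time.sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")

# Replaying the phase sequence seen in the log: three Pending polls,
# then Succeeded on the fourth.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
assert wait_for_pod_terminal(lambda: next(phases), interval=0.01) == "Succeeded"
```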
• [SLOW TEST:6.069 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":370,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:54:39.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8625 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-8625 I0903 13:54:39.826847 32 runners.go:190] Created replication controller with name: externalname-service, namespace: services-8625, replica count: 2 I0903 13:54:42.877394 32 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0903 13:54:45.877654 32 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 
running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 3 13:54:45.877: INFO: Creating new exec pod Sep 3 13:54:48.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8625 exec execpodtql68 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Sep 3 13:54:49.143: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Sep 3 13:54:49.143: INFO: stdout: "" Sep 3 13:54:49.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8625 exec execpodtql68 -- /bin/sh -x -c nc -zv -t -w 2 10.132.0.232 80' Sep 3 13:54:49.354: INFO: stderr: "+ nc -zv -t -w 2 10.132.0.232 80\nConnection to 10.132.0.232 80 port [tcp/http] succeeded!\n" Sep 3 13:54:49.355: INFO: stdout: "" Sep 3 13:54:49.355: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:54:49.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8625" for this suite. 
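The `nc -zv -t -w 2 <target> 80` commands the exec pod runs above are plain TCP connect probes with a two-second timeout, first against the service name, then against the ClusterIP. The same check in Python, exercised here against a local listener so it runs without a cluster (in the test, the real targets are `externalname-service` and `10.132.0.232`):

```python
import socket

def tcp_reachable(host, port, timeout=2.0):
    """Equivalent of `nc -zv -t -w 2 host port`: attempt a TCP connect
    and report success or failure without sending any data."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Self-contained demo: a listener we control stands in for the service.
server = socket.socket()
server.bind(("127.0.0.1", 0))      # OS picks a free port
server.listen(1)
port = server.getsockname()[1]
assert tcp_reachable("127.0.0.1", port)
server.close()
assert not tcp_reachable("127.0.0.1", port, timeout=0.5)
```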
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:9.666 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":29,"skipped":662,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:54:43.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Sep 3 13:54:49.967: INFO: Successfully updated pod "annotationupdate1ea9aec3-3d93-4a43-8409-6f16785dc05d" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:54:51.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3196" for this suite. 
• [SLOW TEST:8.593 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":161,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:54:48.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:54:52.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2005" for this suite. 
•
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":375,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:54:31.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-downwardapi-fn9k
STEP: Creating a pod to test atomic-volume-subpath
Sep 3 13:54:31.590: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-fn9k" in namespace "subpath-2835" to be "Succeeded or Failed"
Sep 3 13:54:31.593: INFO: Pod "pod-subpath-test-downwardapi-fn9k": Phase="Pending", Reason="", readiness=false. Elapsed: 3.241828ms
Sep 3 13:54:33.596: INFO: Pod "pod-subpath-test-downwardapi-fn9k": Phase="Running", Reason="", readiness=true. Elapsed: 2.006498136s
Sep 3 13:54:36.016: INFO: Pod "pod-subpath-test-downwardapi-fn9k": Phase="Running", Reason="", readiness=true. Elapsed: 4.42623431s
Sep 3 13:54:38.029: INFO: Pod "pod-subpath-test-downwardapi-fn9k": Phase="Running", Reason="", readiness=true. Elapsed: 6.439709965s
Sep 3 13:54:40.033: INFO: Pod "pod-subpath-test-downwardapi-fn9k": Phase="Running", Reason="", readiness=true.
Elapsed: 8.442877863s Sep 3 13:54:42.036: INFO: Pod "pod-subpath-test-downwardapi-fn9k": Phase="Running", Reason="", readiness=true. Elapsed: 10.446299562s Sep 3 13:54:44.040: INFO: Pod "pod-subpath-test-downwardapi-fn9k": Phase="Running", Reason="", readiness=true. Elapsed: 12.450452059s Sep 3 13:54:46.044: INFO: Pod "pod-subpath-test-downwardapi-fn9k": Phase="Running", Reason="", readiness=true. Elapsed: 14.453944062s Sep 3 13:54:48.048: INFO: Pod "pod-subpath-test-downwardapi-fn9k": Phase="Running", Reason="", readiness=true. Elapsed: 16.457779848s Sep 3 13:54:50.051: INFO: Pod "pod-subpath-test-downwardapi-fn9k": Phase="Running", Reason="", readiness=true. Elapsed: 18.461119932s Sep 3 13:54:52.054: INFO: Pod "pod-subpath-test-downwardapi-fn9k": Phase="Running", Reason="", readiness=true. Elapsed: 20.464322015s Sep 3 13:54:54.057: INFO: Pod "pod-subpath-test-downwardapi-fn9k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.467118771s STEP: Saw pod success Sep 3 13:54:54.057: INFO: Pod "pod-subpath-test-downwardapi-fn9k" satisfied condition "Succeeded or Failed" Sep 3 13:54:54.060: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-7jvhm pod pod-subpath-test-downwardapi-fn9k container test-container-subpath-downwardapi-fn9k: STEP: delete the pod Sep 3 13:54:54.406: INFO: Waiting for pod pod-subpath-test-downwardapi-fn9k to disappear Sep 3 13:54:54.408: INFO: Pod pod-subpath-test-downwardapi-fn9k no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-fn9k Sep 3 13:54:54.408: INFO: Deleting pod "pod-subpath-test-downwardapi-fn9k" in namespace "subpath-2835" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:54:54.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2835" for this suite. 
• [SLOW TEST:22.866 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":23,"skipped":403,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:54:42.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:54:55.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1259" for this suite. • [SLOW TEST:13.091 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":-1,"completed":24,"skipped":423,"failed":0} S ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:54:39.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-5953 STEP: creating service affinity-clusterip in namespace services-5953 STEP: creating replication controller affinity-clusterip in namespace services-5953 I0903 13:54:39.407750 28 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-5953, replica count: 3 I0903 13:54:42.458238 28 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 3 13:54:42.463: INFO: Creating new exec pod Sep 3 13:54:47.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5953 exec execpod-affinitytvs9w -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Sep 3 13:54:47.716: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Sep 3 13:54:47.716: INFO: stdout: "" Sep 3 13:54:47.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5953 exec execpod-affinitytvs9w -- /bin/sh -x -c nc -zv -t -w 2 10.132.9.9 80' Sep 3 13:54:47.956: INFO: 
stderr: "+ nc -zv -t -w 2 10.132.9.9 80\nConnection to 10.132.9.9 80 port [tcp/http] succeeded!\n" Sep 3 13:54:47.956: INFO: stdout: "" Sep 3 13:54:47.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5953 exec execpod-affinitytvs9w -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.132.9.9:80/ ; done' Sep 3 13:54:48.294: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.9.9:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.9.9:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.9.9:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.9.9:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.9.9:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.9.9:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.9.9:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.9.9:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.9.9:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.9.9:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.9.9:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.9.9:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.9.9:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.9.9:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.9.9:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.9.9:80/\n" Sep 3 13:54:48.295: INFO: stdout: "\naffinity-clusterip-g6n98\naffinity-clusterip-g6n98\naffinity-clusterip-g6n98\naffinity-clusterip-g6n98\naffinity-clusterip-g6n98\naffinity-clusterip-g6n98\naffinity-clusterip-g6n98\naffinity-clusterip-g6n98\naffinity-clusterip-g6n98\naffinity-clusterip-g6n98\naffinity-clusterip-g6n98\naffinity-clusterip-g6n98\naffinity-clusterip-g6n98\naffinity-clusterip-g6n98\naffinity-clusterip-g6n98\naffinity-clusterip-g6n98" Sep 3 13:54:48.295: INFO: Received response from host: affinity-clusterip-g6n98 Sep 3 
13:54:48.295: INFO: Received response from host: affinity-clusterip-g6n98 Sep 3 13:54:48.295: INFO: Received response from host: affinity-clusterip-g6n98 Sep 3 13:54:48.295: INFO: Received response from host: affinity-clusterip-g6n98 Sep 3 13:54:48.295: INFO: Received response from host: affinity-clusterip-g6n98 Sep 3 13:54:48.295: INFO: Received response from host: affinity-clusterip-g6n98 Sep 3 13:54:48.295: INFO: Received response from host: affinity-clusterip-g6n98 Sep 3 13:54:48.295: INFO: Received response from host: affinity-clusterip-g6n98 Sep 3 13:54:48.295: INFO: Received response from host: affinity-clusterip-g6n98 Sep 3 13:54:48.295: INFO: Received response from host: affinity-clusterip-g6n98 Sep 3 13:54:48.295: INFO: Received response from host: affinity-clusterip-g6n98 Sep 3 13:54:48.295: INFO: Received response from host: affinity-clusterip-g6n98 Sep 3 13:54:48.295: INFO: Received response from host: affinity-clusterip-g6n98 Sep 3 13:54:48.295: INFO: Received response from host: affinity-clusterip-g6n98 Sep 3 13:54:48.295: INFO: Received response from host: affinity-clusterip-g6n98 Sep 3 13:54:48.295: INFO: Received response from host: affinity-clusterip-g6n98 Sep 3 13:54:48.295: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-5953, will wait for the garbage collector to delete the pods Sep 3 13:54:48.367: INFO: Deleting ReplicationController affinity-clusterip took: 5.428727ms Sep 3 13:54:48.467: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.264628ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:54:57.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5953" for this suite. 
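The affinity verification above boils down to: send 16 requests through the same client pod and require every response to name the same backend pod. The parsing-and-check step, applied to the stdout captured in the log (`affinity_holds` is an illustrative helper, not the e2e framework's function):

```python
def affinity_holds(responses):
    """With sessionAffinity: ClientIP, all requests from one client
    should land on the same backend; blank lines from the echo in the
    curl loop are dropped before comparing."""
    hosts = [r for r in responses if r]
    return len(hosts) > 0 and len(set(hosts)) == 1

# stdout as captured in the log: 16 responses, all from the same pod.
stdout = "\naffinity-clusterip-g6n98" * 16
assert affinity_holds(stdout.split("\n"))
assert not affinity_holds(["pod-a", "pod-b"])
```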
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:17.912 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":23,"skipped":307,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:54:55.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 3 13:54:55.875: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-dbeceda9-edcd-44a2-a415-d0d3867293f2" in namespace "security-context-test-4916" to be "Succeeded or Failed"
Sep 3 13:54:55.878: INFO: Pod "busybox-readonly-false-dbeceda9-edcd-44a2-a415-d0d3867293f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193801ms
Sep 3 13:54:57.881: INFO: Pod "busybox-readonly-false-dbeceda9-edcd-44a2-a415-d0d3867293f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005692627s
Sep 3 13:54:57.881: INFO: Pod "busybox-readonly-false-dbeceda9-edcd-44a2-a415-d0d3867293f2" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:54:57.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4916" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":424,"failed":0}
SS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:54:52.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78
[It] deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 3 13:54:52.083: INFO: Creating deployment "webserver-deployment"
Sep 3 13:54:52.086: INFO: Waiting for observed generation 1
Sep 3 13:54:54.091: INFO: Waiting for all required pods to come up
Sep 3 13:54:54.095: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Sep 3 13:54:58.102: INFO: Waiting for deployment "webserver-deployment" to complete
Sep 3 13:54:58.109: INFO: Updating deployment "webserver-deployment" with a non-existent image
Sep 3 13:54:58.117: INFO: Updating deployment webserver-deployment
Sep 3 13:54:58.117: INFO: Waiting for observed generation 2
Sep 3 13:55:00.416: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Sep 3 13:55:00.419: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Sep 3 13:55:00.422: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Sep 3 13:55:00.429: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Sep 3 13:55:00.429: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Sep 3 13:55:00.432: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Sep 3 13:55:00.819: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Sep 3 13:55:00.819: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Sep 3 13:55:01.023: INFO: Updating deployment webserver-deployment
Sep 3 13:55:01.023: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Sep 3 13:55:01.030: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Sep 3 13:55:01.033: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
Sep 3 13:55:01.043: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-8902 /apis/apps/v1/namespaces/deployment-8902/deployments/webserver-deployment c6241dbe-902d-4a93-b4ff-c7844073fce2 1055687 3 2021-09-03 13:54:52 +0000 UTC map[name:httpd]
map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-09-03 13:54:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-09-03 13:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] 
[] Always 0xc0031468b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2021-09-03 13:54:58 +0000 UTC,LastTransitionTime:2021-09-03 13:54:52 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-09-03 13:55:01 +0000 UTC,LastTransitionTime:2021-09-03 13:55:01 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Sep 3 13:55:01.047: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-8902 /apis/apps/v1/namespaces/deployment-8902/replicasets/webserver-deployment-795d758f88 91ece1c7-8c77-4310-aacf-1a201228d6ef 1055681 3 2021-09-03 13:54:58 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment c6241dbe-902d-4a93-b4ff-c7844073fce2 0xc003146d57 0xc003146d58}] [] [{kube-controller-manager Update apps/v1 2021-09-03 13:54:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6241dbe-902d-4a93-b4ff-c7844073fce2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003146df8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 3 13:55:01.047: INFO: All old ReplicaSets of Deployment "webserver-deployment": Sep 3 13:55:01.047: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-8902 /apis/apps/v1/namespaces/deployment-8902/replicasets/webserver-deployment-dd94f59b7 ba2b797e-9807-4000-b4ea-20ff95668f26 1055677 3 2021-09-03 13:54:52 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment c6241dbe-902d-4a93-b4ff-c7844073fce2 0xc003146e57 0xc003146e58}] [] [{kube-controller-manager Update apps/v1 2021-09-03 13:54:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6241dbe-902d-4a93-b4ff-c7844073fce2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:
&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003146ec8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Sep 3 13:55:01.057: INFO: Pod "webserver-deployment-795d758f88-56zv5" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-56zv5 webserver-deployment-795d758f88- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-795d758f88-56zv5 84dc35aa-d1cf-465e-ac54-84314b228d48 1055708 0 2021-09-03 13:55:01 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 91ece1c7-8c77-4310-aacf-1a201228d6ef 0xc0031473d7 0xc0031473d8}] [] [{kube-controller-manager Update v1 2021-09-03 13:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ece1c7-8c77-4310-aacf-1a201228d6ef\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.058: INFO: Pod "webserver-deployment-795d758f88-7xqr7" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-7xqr7 webserver-deployment-795d758f88- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-795d758f88-7xqr7 1bc8dfc6-a95e-48ea-857c-5609f8d63531 
1055696 0 2021-09-03 13:55:01 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 91ece1c7-8c77-4310-aacf-1a201228d6ef 0xc0031474f0 0xc0031474f1}] [] [{kube-controller-manager Update v1 2021-09-03 13:55:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ece1c7-8c77-4310-aacf-1a201228d6ef\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessageP
ath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.058: 
INFO: Pod "webserver-deployment-795d758f88-dfzmb" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-dfzmb webserver-deployment-795d758f88- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-795d758f88-dfzmb c2932b00-a00b-46f9-bc35-4959891a71c0 1055704 0 2021-09-03 13:55:01 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 91ece1c7-8c77-4310-aacf-1a201228d6ef 0xc003147610 0xc003147611}] [] [{kube-controller-manager Update v1 2021-09-03 13:55:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ece1c7-8c77-4310-aacf-1a201228d6ef\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:Resou
rceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase
:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.058: INFO: Pod "webserver-deployment-795d758f88-f7pbw" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-f7pbw webserver-deployment-795d758f88- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-795d758f88-f7pbw f20ad49c-f65f-4d14-a11f-217e061f62ef 1055710 0 2021-09-03 13:55:01 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 91ece1c7-8c77-4310-aacf-1a201228d6ef 0xc003147730 0xc003147731}] [] [{kube-controller-manager Update v1 2021-09-03 13:55:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ece1c7-8c77-4310-aacf-1a201228d6ef\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,Ph
otonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Re
adinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.058: INFO: Pod "webserver-deployment-795d758f88-fxm8q" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-fxm8q webserver-deployment-795d758f88- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-795d758f88-fxm8q d20babc5-4e63-499f-b322-d167def9fbbe 1055707 0 2021-09-03 13:55:01 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 91ece1c7-8c77-4310-aacf-1a201228d6ef 0xc003147850 0xc003147851}] [] [{kube-controller-manager Update v1 2021-09-03 13:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ece1c7-8c77-4310-aacf-1a201228d6ef\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.059: INFO: Pod "webserver-deployment-795d758f88-h8m48" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-h8m48 webserver-deployment-795d758f88- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-795d758f88-h8m48 6b69cb92-d620-4190-9b57-3ef36a950c65 
1055655 0 2021-09-03 13:54:58 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 91ece1c7-8c77-4310-aacf-1a201228d6ef 0xc003147980 0xc003147981}] [] [{kube-controller-manager Update v1 2021-09-03 13:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ece1c7-8c77-4310-aacf-1a201228d6ef\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-09-03 13:55:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.189\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap
:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-7jvhm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:192.168.1.189,StartTime:2021-09-03 13:54:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization 
failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.189,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.059: INFO: Pod "webserver-deployment-795d758f88-j7cxl" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-j7cxl webserver-deployment-795d758f88- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-795d758f88-j7cxl bb999782-e7be-47cf-a10a-012622e7f393 1055629 0 2021-09-03 13:54:58 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 91ece1c7-8c77-4310-aacf-1a201228d6ef 0xc003147b47 0xc003147b48}] [] [{kube-controller-manager Update v1 2021-09-03 13:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ece1c7-8c77-4310-aacf-1a201228d6ef\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-09-03 13:54:58 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-7jvhm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-09-03 13:54:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2021-09-03 13:54:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.059: INFO: Pod "webserver-deployment-795d758f88-mcsst" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-mcsst webserver-deployment-795d758f88- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-795d758f88-mcsst 3b68717c-b040-4007-a3e6-edaa8c339355 1055690 0 2021-09-03 13:55:01 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 91ece1c7-8c77-4310-aacf-1a201228d6ef 0xc003147ce7 0xc003147ce8}] [] [{kube-controller-manager Update v1 2021-09-03 13:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ece1c7-8c77-4310-aacf-1a201228d6ef\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-5n8xl,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:55:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.059: INFO: Pod "webserver-deployment-795d758f88-qh5h4" is not available: 
&Pod{ObjectMeta:{webserver-deployment-795d758f88-qh5h4 webserver-deployment-795d758f88- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-795d758f88-qh5h4 dd1d5b1e-29e9-4190-bcd7-65ea924f2747 1055635 0 2021-09-03 13:54:58 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 91ece1c7-8c77-4310-aacf-1a201228d6ef 0xc003147e17 0xc003147e18}] [] [{kube-controller-manager Update v1 2021-09-03 13:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ece1c7-8c77-4310-aacf-1a201228d6ef\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-09-03 13:54:59 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-7jvhm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-09-03 13:54:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2021-09-03 13:54:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.059: INFO: Pod "webserver-deployment-795d758f88-qpvzj" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-qpvzj webserver-deployment-795d758f88- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-795d758f88-qpvzj 14a10082-94df-46f6-9117-1d9386ee7a2d 1055647 0 2021-09-03 13:54:58 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 91ece1c7-8c77-4310-aacf-1a201228d6ef 0xc003147fb7 0xc003147fb8}] [] [{kube-controller-manager Update v1 2021-09-03 13:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ece1c7-8c77-4310-aacf-1a201228d6ef\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-09-03 13:54:59 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-5n8xl,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-09-03 13:54:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2021-09-03 13:54:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.060: INFO: Pod "webserver-deployment-795d758f88-twrlr" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-twrlr webserver-deployment-795d758f88- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-795d758f88-twrlr ad6a4e29-3c12-46ae-b7ac-ee8519c2a652 1055711 0 2021-09-03 13:55:01 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 91ece1c7-8c77-4310-aacf-1a201228d6ef 0xc003f08157 0xc003f08158}] [] [{kube-controller-manager Update v1 2021-09-03 13:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ece1c7-8c77-4310-aacf-1a201228d6ef\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-5n8xl,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:55:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.060: INFO: Pod "webserver-deployment-795d758f88-tx5h6" is not available: 
&Pod{ObjectMeta:{webserver-deployment-795d758f88-tx5h6 webserver-deployment-795d758f88- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-795d758f88-tx5h6 e289961d-a76b-4289-9da9-0e6ff0ffd9c9 1055634 0 2021-09-03 13:54:58 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 91ece1c7-8c77-4310-aacf-1a201228d6ef 0xc003f08287 0xc003f08288}] [] [{kube-controller-manager Update v1 2021-09-03 13:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91ece1c7-8c77-4310-aacf-1a201228d6ef\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-09-03 13:54:59 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-5n8xl,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-09-03 13:54:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2021-09-03 13:54:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.060: INFO: Pod "webserver-deployment-dd94f59b7-4fv7h" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-4fv7h webserver-deployment-dd94f59b7- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-dd94f59b7-4fv7h a4649e10-5eb5-4033-bee1-ca9399afe981 1055697 0 2021-09-03 13:55:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba2b797e-9807-4000-b4ea-20ff95668f26 0xc003f08437 0xc003f08438}] [] [{kube-controller-manager Update v1 2021-09-03 13:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2b797e-9807-4000-b4ea-20ff95668f26\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.060: INFO: Pod "webserver-deployment-dd94f59b7-5gmg4" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-5gmg4 webserver-deployment-dd94f59b7- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-dd94f59b7-5gmg4 
a7d147ca-c824-40a4-b4c2-96b0c55188d1 1055474 0 2021-09-03 13:54:52 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba2b797e-9807-4000-b4ea-20ff95668f26 0xc003f08540 0xc003f08541}] [] [{kube-controller-manager Update v1 2021-09-03 13:54:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2b797e-9807-4000-b4ea-20ff95668f26\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-09-03 13:54:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.187\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:
File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-7jvhm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:52 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:192.168.1.187,StartTime:2021-09-03 13:54:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-09-03 13:54:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9fbc4848e38f78cc1c1ecd59cce39e75364bdb0692c1fcac17bf72261bd1a141,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.187,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.061: INFO: Pod "webserver-deployment-dd94f59b7-77szp" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-77szp webserver-deployment-dd94f59b7- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-dd94f59b7-77szp 12572ce7-f014-4b79-88a1-aee4609981f2 1055713 0 2021-09-03 13:55:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba2b797e-9807-4000-b4ea-20ff95668f26 0xc003f086d7 0xc003f086d8}] [] [{kube-controller-manager Update v1 2021-09-03 13:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2b797e-9807-4000-b4ea-20ff95668f26\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.061: INFO: Pod "webserver-deployment-dd94f59b7-bhhjk" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-bhhjk webserver-deployment-dd94f59b7- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-dd94f59b7-bhhjk 
525f6b65-b61b-471b-b0a2-a15ccdfa0e85 1055548 0 2021-09-03 13:54:52 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba2b797e-9807-4000-b4ea-20ff95668f26 0xc003f087e0 0xc003f087e1}] [] [{kube-controller-manager Update v1 2021-09-03 13:54:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2b797e-9807-4000-b4ea-20ff95668f26\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-09-03 13:54:57 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.126\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:
File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-5n8xl,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:52 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:192.168.2.126,StartTime:2021-09-03 13:54:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-09-03 13:54:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c5cf68dc86a71b7c9aeffcea916eb691f6f5a28539c449978098adfc3fd4dbc4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.126,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.061: INFO: Pod "webserver-deployment-dd94f59b7-cblqc" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-cblqc webserver-deployment-dd94f59b7- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-dd94f59b7-cblqc d89d5060-fc6e-4e08-aa04-ac1a18c9632c 1055499 0 2021-09-03 13:54:52 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba2b797e-9807-4000-b4ea-20ff95668f26 0xc003f08977 0xc003f08978}] [] [{kube-controller-manager Update v1 2021-09-03 13:54:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2b797e-9807-4000-b4ea-20ff95668f26\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-09-03 13:54:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.125\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Res
ourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-5n8xl,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]Epheme
ralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:192.168.2.125,StartTime:2021-09-03 13:54:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-09-03 13:54:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0c81e57d35ef2da5f4b9d99c9f453cc298e67af2d8ff8d26f05695fa0ec12aa4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.125,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.061: INFO: Pod "webserver-deployment-dd94f59b7-crkbk" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-crkbk webserver-deployment-dd94f59b7- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-dd94f59b7-crkbk 968a267b-7370-47ea-9c97-ef8197a53b02 1055698 0 2021-09-03 13:55:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 
ba2b797e-9807-4000-b4ea-20ff95668f26 0xc003f08b17 0xc003f08b18}] [] [{kube-controller-manager Update v1 2021-09-03 13:55:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2b797e-9807-4000-b4ea-20ff95668f26\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,S
ELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-7jvhm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:55:01 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.062: INFO: Pod "webserver-deployment-dd94f59b7-cwm9j" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-cwm9j webserver-deployment-dd94f59b7- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-dd94f59b7-cwm9j 137d1b8b-5a5f-45b3-9139-c2fd270b5fde 1055712 0 2021-09-03 13:55:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba2b797e-9807-4000-b4ea-20ff95668f26 0xc003f08c37 0xc003f08c38}] [] [{kube-controller-manager Update v1 2021-09-03 13:55:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2b797e-9807-4000-b4ea-20ff95668f26\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk
:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:
nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.062: INFO: Pod "webserver-deployment-dd94f59b7-gflnq" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-gflnq webserver-deployment-dd94f59b7- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-dd94f59b7-gflnq d7f855f8-86a6-4b43-aabd-2744f0ae7831 1055483 0 2021-09-03 13:54:52 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba2b797e-9807-4000-b4ea-20ff95668f26 0xc003f08d50 0xc003f08d51}] [] [{kube-controller-manager Update v1 2021-09-03 13:54:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2b797e-9807-4000-b4ea-20ff95668f26\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-09-03 13:54:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.122\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:
File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-5n8xl,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:52 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:192.168.2.122,StartTime:2021-09-03 13:54:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-09-03 13:54:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2c969aa929bc282a741f01003f4b08b5b73fab0cf991cb19d85a1b7e53169785,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.122,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.062: INFO: Pod "webserver-deployment-dd94f59b7-knch2" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-knch2 webserver-deployment-dd94f59b7- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-dd94f59b7-knch2 8d039a6d-3d1c-4e20-9342-3cd9ddd8c6a2 1055701 0 2021-09-03 13:55:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba2b797e-9807-4000-b4ea-20ff95668f26 0xc003f08ee7 0xc003f08ee8}] [] [{kube-controller-manager Update v1 2021-09-03 13:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2b797e-9807-4000-b4ea-20ff95668f26\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-5n8xl,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:55:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.062: INFO: Pod "webserver-deployment-dd94f59b7-kt6jf" is not available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-kt6jf webserver-deployment-dd94f59b7- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-dd94f59b7-kt6jf cad6c88f-9e36-4c04-bacf-484226377396 1055716 0 2021-09-03 13:55:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba2b797e-9807-4000-b4ea-20ff95668f26 0xc003f09007 0xc003f09008}] [] [{kube-controller-manager Update v1 2021-09-03 13:55:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2b797e-9807-4000-b4ea-20ff95668f26\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:
[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Rea
son:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.063: INFO: Pod "webserver-deployment-dd94f59b7-llcfj" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-llcfj webserver-deployment-dd94f59b7- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-dd94f59b7-llcfj 9fe389ab-4242-45c8-8a03-ab8123e30926 1055505 0 2021-09-03 13:54:52 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba2b797e-9807-4000-b4ea-20ff95668f26 0xc003f09110 0xc003f09111}] [] [{kube-controller-manager Update v1 2021-09-03 13:54:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2b797e-9807-4000-b4ea-20ff95668f26\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-09-03 13:54:56 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.186\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:
File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-7jvhm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:52 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:192.168.1.186,StartTime:2021-09-03 13:54:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-09-03 13:54:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://be3cd08a6a16dfcc0e1e8b27dbdbf4707ccd68d7852ab73c83531f889f19c2e7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.186,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.063: INFO: Pod "webserver-deployment-dd94f59b7-q6w85" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-q6w85 webserver-deployment-dd94f59b7- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-dd94f59b7-q6w85 d623a605-49c8-4210-9b84-dbab95aeb927 1055714 0 2021-09-03 13:55:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba2b797e-9807-4000-b4ea-20ff95668f26 0xc003f092a7 0xc003f092a8}] [] [{kube-controller-manager Update v1 2021-09-03 13:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2b797e-9807-4000-b4ea-20ff95668f26\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.063: INFO: Pod "webserver-deployment-dd94f59b7-qchwh" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-qchwh webserver-deployment-dd94f59b7- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-dd94f59b7-qchwh 
814f9cf1-7ba7-418e-b465-b362ca14c6eb 1055508 0 2021-09-03 13:54:52 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba2b797e-9807-4000-b4ea-20ff95668f26 0xc003f093b0 0xc003f093b1}] [] [{kube-controller-manager Update v1 2021-09-03 13:54:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2b797e-9807-4000-b4ea-20ff95668f26\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-09-03 13:54:56 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.123\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:
File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-5n8xl,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:52 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:192.168.2.123,StartTime:2021-09-03 13:54:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-09-03 13:54:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9be9e4a6a74ea92fea098a9e97c4435c3a068ebf0601541db198c3ea70bdc541,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.123,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.063: INFO: Pod "webserver-deployment-dd94f59b7-r5sm6" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-r5sm6 webserver-deployment-dd94f59b7- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-dd94f59b7-r5sm6 6971e913-2180-4a53-8e81-e1e2ead6852f 1055709 0 2021-09-03 13:55:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba2b797e-9807-4000-b4ea-20ff95668f26 0xc003f09557 0xc003f09558}] [] [{kube-controller-manager Update v1 2021-09-03 13:55:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2b797e-9807-4000-b4ea-20ff95668f26\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-5n8xl,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:55:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.064: INFO: Pod "webserver-deployment-dd94f59b7-r8v8f" is not available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-r8v8f webserver-deployment-dd94f59b7- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-dd94f59b7-r8v8f 6de797d0-e3a2-4e3f-adab-efe6ead0f05d 1055686 0 2021-09-03 13:55:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba2b797e-9807-4000-b4ea-20ff95668f26 0xc003f09677 0xc003f09678}] [] [{kube-controller-manager Update v1 2021-09-03 13:55:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2b797e-9807-4000-b4ea-20ff95668f26\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:
[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-7jvhm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Condition
s:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:55:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.064: INFO: Pod "webserver-deployment-dd94f59b7-rgcmw" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-rgcmw webserver-deployment-dd94f59b7- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-dd94f59b7-rgcmw b85623f6-b0b3-4835-a965-4ed3ceaa357a 1055477 0 2021-09-03 13:54:52 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba2b797e-9807-4000-b4ea-20ff95668f26 0xc003f09797 0xc003f09798}] [] [{kube-controller-manager Update v1 2021-09-03 13:54:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2b797e-9807-4000-b4ea-20ff95668f26\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-09-03 13:54:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.124\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:
File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-5n8xl,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:52 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:192.168.2.124,StartTime:2021-09-03 13:54:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-09-03 13:54:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://09deed558c8602af9a27b5d9072185f793cfb2957bb3d6ee3f9fe556c6a9217a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.124,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.064: INFO: Pod "webserver-deployment-dd94f59b7-rj6x5" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-rj6x5 webserver-deployment-dd94f59b7- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-dd94f59b7-rj6x5 6a3ffe04-c989-4ae8-b71a-8e216cc214c6 1055466 0 2021-09-03 13:54:52 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba2b797e-9807-4000-b4ea-20ff95668f26 0xc003f09937 0xc003f09938}] [] [{kube-controller-manager Update v1 2021-09-03 13:54:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2b797e-9807-4000-b4ea-20ff95668f26\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-09-03 13:54:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.184\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Res
ourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-7jvhm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]Epheme
ralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:54:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:192.168.1.184,StartTime:2021-09-03 13:54:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-09-03 13:54:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c5f9f7b88cee63ea655bc0a555a4bbf43908d5ed47758c2c542cd1a79074aa34,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.184,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.064: INFO: Pod "webserver-deployment-dd94f59b7-rsh42" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-rsh42 webserver-deployment-dd94f59b7- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-dd94f59b7-rsh42 b95db868-4f5a-4448-afac-f697817c131e 1055695 0 2021-09-03 13:55:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 
ba2b797e-9807-4000-b4ea-20ff95668f26 0xc003f09ad7 0xc003f09ad8}] [] [{kube-controller-manager Update v1 2021-09-03 13:55:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2b797e-9807-4000-b4ea-20ff95668f26\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,S
ELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.065: INFO: Pod "webserver-deployment-dd94f59b7-s5br2" is not available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-s5br2 webserver-deployment-dd94f59b7- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-dd94f59b7-s5br2 080f4253-4f2c-4e7f-a6ed-eb0d62cba48c 1055715 0 2021-09-03 13:55:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba2b797e-9807-4000-b4ea-20ff95668f26 0xc003f09be0 0xc003f09be1}] [] [{kube-controller-manager Update v1 2021-09-03 13:55:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2b797e-9807-4000-b4ea-20ff95668f26\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:
[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Rea
son:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 3 13:55:01.065: INFO: Pod "webserver-deployment-dd94f59b7-tppvb" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-tppvb webserver-deployment-dd94f59b7- deployment-8902 /api/v1/namespaces/deployment-8902/pods/webserver-deployment-dd94f59b7-tppvb b3525640-059e-4378-bbb4-eeb127655df0 1055689 0 2021-09-03 13:55:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba2b797e-9807-4000-b4ea-20ff95668f26 0xc003f09cf0 0xc003f09cf1}] [] [{kube-controller-manager Update v1 2021-09-03 13:55:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2b797e-9807-4000-b4ea-20ff95668f26\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjjfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjjfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,P
rojected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjjfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-7jvhm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Re
adinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:55:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:55:01.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8902" for this suite.

• [SLOW TEST:9.016 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support proportional scaling [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":9,"skipped":203,"failed":0}
SS
------------------------------
[BeforeEach] [k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:54:57.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should be submitted and removed [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Sep 3 13:54:57.931: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:55:04.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3058" for this suite.

• [SLOW TEST:6.744 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should be submitted and removed [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":426,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-network] IngressClass API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:55:04.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingressclass
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] IngressClass API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148
[It] should support creating IngressClass API
operations [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Sep 3 13:55:04.709: INFO: starting watch
STEP: patching
STEP: updating
Sep 3 13:55:04.718: INFO: waiting for watch events with expected annotations
Sep 3 13:55:04.718: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] IngressClass API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:55:04.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-3483" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":27,"skipped":434,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:55:01.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 3 13:55:01.107: INFO: Waiting up to 5m0s for pod
"downwardapi-volume-ba6ae59b-9599-456a-aa89-2ad3769a3521" in namespace "projected-2150" to be "Succeeded or Failed"
Sep 3 13:55:01.109: INFO: Pod "downwardapi-volume-ba6ae59b-9599-456a-aa89-2ad3769a3521": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040032ms
Sep 3 13:55:03.113: INFO: Pod "downwardapi-volume-ba6ae59b-9599-456a-aa89-2ad3769a3521": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006021972s
Sep 3 13:55:05.117: INFO: Pod "downwardapi-volume-ba6ae59b-9599-456a-aa89-2ad3769a3521": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009956402s
Sep 3 13:55:07.120: INFO: Pod "downwardapi-volume-ba6ae59b-9599-456a-aa89-2ad3769a3521": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013656214s
STEP: Saw pod success
Sep 3 13:55:07.121: INFO: Pod "downwardapi-volume-ba6ae59b-9599-456a-aa89-2ad3769a3521" satisfied condition "Succeeded or Failed"
Sep 3 13:55:07.122: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-7jvhm pod downwardapi-volume-ba6ae59b-9599-456a-aa89-2ad3769a3521 container client-container:
STEP: delete the pod
Sep 3 13:55:07.134: INFO: Waiting for pod downwardapi-volume-ba6ae59b-9599-456a-aa89-2ad3769a3521 to disappear
Sep 3 13:55:07.136: INFO: Pod downwardapi-volume-ba6ae59b-9599-456a-aa89-2ad3769a3521 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:55:07.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2150" for this suite.
• [SLOW TEST:6.065 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":205,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:54:39.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-355
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 3 13:54:39.483: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Sep 3 13:54:39.501: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep 3 13:54:41.519: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep 3 13:54:43.505: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 3 13:54:45.505: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 3 13:54:47.503: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 3 13:54:49.504: INFO: The status of Pod netserver-0 is
Running (Ready = false)
Sep 3 13:54:51.505: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 3 13:54:53.504: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 3 13:54:55.504: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 3 13:54:57.504: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 3 13:54:59.504: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 3 13:55:01.504: INFO: The status of Pod netserver-0 is Running (Ready = true)
Sep 3 13:55:01.510: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Sep 3 13:55:07.528: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.140:8080/dial?request=hostname&protocol=http&host=192.168.2.114&port=8080&tries=1'] Namespace:pod-network-test-355 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 3 13:55:07.528: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 13:55:07.656: INFO: Waiting for responses: map[]
Sep 3 13:55:07.659: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.140:8080/dial?request=hostname&protocol=http&host=192.168.1.180&port=8080&tries=1'] Namespace:pod-network-test-355 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 3 13:55:07.659: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 13:55:07.767: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:55:07.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-355" for this suite.
• [SLOW TEST:28.318 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for intra-pod communication: http [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":658,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:55:07.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting the auto-created API token
Sep 3 13:55:07.756: INFO: created pod pod-service-account-defaultsa
Sep 3 13:55:07.756: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Sep 3 13:55:07.758: INFO: created pod pod-service-account-mountsa
Sep 3 13:55:07.758: INFO: pod pod-service-account-mountsa service account token volume mount: true
Sep 3 13:55:07.762: INFO: created pod pod-service-account-nomountsa
Sep 3 13:55:07.762: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Sep 3 13:55:07.764: INFO: created pod pod-service-account-defaultsa-mountspec
Sep 3 13:55:07.764: INFO: pod
pod-service-account-defaultsa-mountspec service account token volume mount: true
Sep 3 13:55:07.767: INFO: created pod pod-service-account-mountsa-mountspec
Sep 3 13:55:07.767: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Sep 3 13:55:07.770: INFO: created pod pod-service-account-nomountsa-mountspec
Sep 3 13:55:07.770: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Sep 3 13:55:07.778: INFO: created pod pod-service-account-defaultsa-nomountspec
Sep 3 13:55:07.778: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Sep 3 13:55:07.781: INFO: created pod pod-service-account-mountsa-nomountspec
Sep 3 13:55:07.781: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Sep 3 13:55:07.784: INFO: created pod pod-service-account-nomountsa-nomountspec
Sep 3 13:55:07.784: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:55:07.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3073" for this suite.
•S
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":11,"skipped":224,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:54:57.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:55:08.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-169" for this suite.

• [SLOW TEST:11.067 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a replica set.
[Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
[BeforeEach] [k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:54:42.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod liveness-972b070c-da78-43d8-a2f9-e3feb494f9f2 in namespace container-probe-2248
Sep 3 13:54:48.112: INFO: Started pod liveness-972b070c-da78-43d8-a2f9-e3feb494f9f2 in namespace container-probe-2248
STEP: checking the pod's current state and verifying that restartCount is present
Sep 3 13:54:48.114: INFO: Initial restart count of pod liveness-972b070c-da78-43d8-a2f9-e3feb494f9f2 is 0
Sep 3 13:55:08.427: INFO: Restart count of pod container-probe-2248/liveness-972b070c-da78-43d8-a2f9-e3feb494f9f2 is now 1 (20.312514826s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:55:08.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2248" for this suite.
• [SLOW TEST:26.372 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":247,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:54:54.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope.
[Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:55:10.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9853" for this suite.

• [SLOW TEST:16.433 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with best effort scope. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope.
[Conformance]","total":-1,"completed":24,"skipped":409,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:54:28.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Sep 3 13:54:28.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Sep 3 13:54:45.603: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 13:54:49.555: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:55:11.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2879" for this suite.
• [SLOW TEST:43.243 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of same group but different versions [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":17,"skipped":269,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:55:04.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Sep 3 13:55:04.801: INFO: Waiting up to 5m0s for pod "downward-api-bc6b0e8e-7971-4357-a2a8-6611dfec583f" in namespace "downward-api-5037" to be "Succeeded or Failed"
Sep 3 13:55:04.803: INFO: Pod "downward-api-bc6b0e8e-7971-4357-a2a8-6611dfec583f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.490694ms
Sep 3 13:55:06.806: INFO: Pod "downward-api-bc6b0e8e-7971-4357-a2a8-6611dfec583f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005389181s
Sep 3 13:55:08.810: INFO: Pod "downward-api-bc6b0e8e-7971-4357-a2a8-6611dfec583f": Phase="Pending", Reason="", readiness=false.
Elapsed: 4.008763751s
Sep 3 13:55:10.813: INFO: Pod "downward-api-bc6b0e8e-7971-4357-a2a8-6611dfec583f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011890699s
Sep 3 13:55:12.820: INFO: Pod "downward-api-bc6b0e8e-7971-4357-a2a8-6611dfec583f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.019552269s
STEP: Saw pod success
Sep 3 13:55:12.821: INFO: Pod "downward-api-bc6b0e8e-7971-4357-a2a8-6611dfec583f" satisfied condition "Succeeded or Failed"
Sep 3 13:55:12.824: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod downward-api-bc6b0e8e-7971-4357-a2a8-6611dfec583f container dapi-container:
STEP: delete the pod
Sep 3 13:55:12.839: INFO: Waiting for pod downward-api-bc6b0e8e-7971-4357-a2a8-6611dfec583f to disappear
Sep 3 13:55:12.841: INFO: Pod downward-api-bc6b0e8e-7971-4357-a2a8-6611dfec583f no longer exists
[AfterEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:55:12.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5037" for this suite.
• [SLOW TEST:8.082 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":444,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:54:52.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Sep 3 13:54:52.541: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 13:54:56.983: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:55:15.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-507" for this suite.
• [SLOW TEST:22.901 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":22,"skipped":384,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:10.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium Sep 3 13:55:10.915: INFO: Waiting up to 5m0s for pod "pod-56163dfc-6172-4eed-968a-b38b0ccdd212" in namespace "emptydir-8540" to be "Succeeded or Failed" Sep 3 13:55:10.917: INFO: Pod "pod-56163dfc-6172-4eed-968a-b38b0ccdd212": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039723ms Sep 3 13:55:12.919: INFO: Pod "pod-56163dfc-6172-4eed-968a-b38b0ccdd212": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004398324s Sep 3 13:55:14.923: INFO: Pod "pod-56163dfc-6172-4eed-968a-b38b0ccdd212": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.007542768s Sep 3 13:55:16.926: INFO: Pod "pod-56163dfc-6172-4eed-968a-b38b0ccdd212": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011031844s STEP: Saw pod success Sep 3 13:55:16.926: INFO: Pod "pod-56163dfc-6172-4eed-968a-b38b0ccdd212" satisfied condition "Succeeded or Failed" Sep 3 13:55:16.929: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-56163dfc-6172-4eed-968a-b38b0ccdd212 container test-container: STEP: delete the pod Sep 3 13:55:16.941: INFO: Waiting for pod pod-56163dfc-6172-4eed-968a-b38b0ccdd212 to disappear Sep 3 13:55:16.943: INFO: Pod pod-56163dfc-6172-4eed-968a-b38b0ccdd212 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:55:16.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8540" for this suite. • [SLOW TEST:6.067 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":421,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:07.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:55:17.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3500" for this suite. • [SLOW TEST:10.056 seconds] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a read only busybox container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:188 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":243,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:17.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 3 13:55:17.961: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:55:18.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-359" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":13,"skipped":280,"failed":0} SSSS ------------------------------ [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:07.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition Sep 3 13:55:07.840: INFO: Waiting up to 5m0s for pod "var-expansion-cd5cfb01-e595-46f9-aeeb-a4061a1b866f" in namespace "var-expansion-7305" to be "Succeeded or Failed" Sep 3 13:55:07.842: INFO: Pod "var-expansion-cd5cfb01-e595-46f9-aeeb-a4061a1b866f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.267467ms Sep 3 13:55:09.853: INFO: Pod "var-expansion-cd5cfb01-e595-46f9-aeeb-a4061a1b866f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012689865s Sep 3 13:55:11.856: INFO: Pod "var-expansion-cd5cfb01-e595-46f9-aeeb-a4061a1b866f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016304358s Sep 3 13:55:13.860: INFO: Pod "var-expansion-cd5cfb01-e595-46f9-aeeb-a4061a1b866f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01987747s Sep 3 13:55:15.863: INFO: Pod "var-expansion-cd5cfb01-e595-46f9-aeeb-a4061a1b866f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022942526s Sep 3 13:55:17.866: INFO: Pod "var-expansion-cd5cfb01-e595-46f9-aeeb-a4061a1b866f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.025810078s Sep 3 13:55:19.870: INFO: Pod "var-expansion-cd5cfb01-e595-46f9-aeeb-a4061a1b866f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.029549777s STEP: Saw pod success Sep 3 13:55:19.870: INFO: Pod "var-expansion-cd5cfb01-e595-46f9-aeeb-a4061a1b866f" satisfied condition "Succeeded or Failed" Sep 3 13:55:19.873: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod var-expansion-cd5cfb01-e595-46f9-aeeb-a4061a1b866f container dapi-container: STEP: delete the pod Sep 3 13:55:19.888: INFO: Waiting for pod var-expansion-cd5cfb01-e595-46f9-aeeb-a4061a1b866f to disappear Sep 3 13:55:19.892: INFO: Pod var-expansion-cd5cfb01-e595-46f9-aeeb-a4061a1b866f no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:55:19.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7305" for this suite. 
• [SLOW TEST:12.093 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":676,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:19.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:55:19.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2337" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • ------------------------------ {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":29,"skipped":685,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:12.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Sep 3 13:55:19.428: INFO: Successfully updated pod "annotationupdate5b05a5ef-1885-42ca-a8f5-8979ef2ebd50" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:55:21.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5068" for this suite. 
• [SLOW TEST:8.583 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":459,"failed":0} SSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":24,"skipped":326,"failed":0} [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:08.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 3 13:55:08.417: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Sep 3 13:55:08.424: INFO: Pod name sample-pod: Found 0 pods out of 1 Sep 3 13:55:13.427: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Sep 3 13:55:15.433: INFO: Creating deployment "test-rolling-update-deployment" Sep 3 13:55:15.437: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the 
one the adopted replica set "test-rolling-update-controller" has Sep 3 13:55:15.441: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Sep 3 13:55:17.448: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Sep 3 13:55:17.451: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274115, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274115, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274115, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274115, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-c4cb8d6d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 3 13:55:19.454: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274115, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274115, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274115, loc:(*time.Location)(0x770e980)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274115, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-c4cb8d6d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 3 13:55:21.455: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 3 13:55:21.464: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-6463 /apis/apps/v1/namespaces/deployment-6463/deployments/test-rolling-update-deployment 52c648d1-13c3-4f1d-bf3c-6d2e8164cbce 1056614 1 2021-09-03 13:55:15 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2021-09-03 13:55:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-09-03 13:55:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000558588 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-09-03 13:55:15 +0000 
UTC,LastTransitionTime:2021-09-03 13:55:15 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" has successfully progressed.,LastUpdateTime:2021-09-03 13:55:21 +0000 UTC,LastTransitionTime:2021-09-03 13:55:15 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Sep 3 13:55:21.468: INFO: New ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9 deployment-6463 /apis/apps/v1/namespaces/deployment-6463/replicasets/test-rolling-update-deployment-c4cb8d6d9 89abb1e1-b8c6-4297-813d-b05706839fa0 1056605 1 2021-09-03 13:55:15 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 52c648d1-13c3-4f1d-bf3c-6d2e8164cbce 0xc000558b00 0xc000558b01}] [] [{kube-controller-manager Update apps/v1 2021-09-03 13:55:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"52c648d1-13c3-4f1d-bf3c-6d2e8164cbce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: c4cb8d6d9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000558b78 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Sep 3 13:55:21.468: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Sep 3 13:55:21.468: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-6463 /apis/apps/v1/namespaces/deployment-6463/replicasets/test-rolling-update-controller 9d1dcc95-ed6c-4a6f-b15f-c08d87b8d4fc 1056613 2 2021-09-03 13:55:08 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 52c648d1-13c3-4f1d-bf3c-6d2e8164cbce 0xc0005589f7 0xc0005589f8}] [] [{e2e.test Update apps/v1 2021-09-03 13:55:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-09-03 13:55:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"52c648d1-13c3-4f1d-bf3c-6d2e8164cbce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000558a98 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 3 13:55:21.471: INFO: Pod "test-rolling-update-deployment-c4cb8d6d9-748xx" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9-748xx test-rolling-update-deployment-c4cb8d6d9- deployment-6463 /api/v1/namespaces/deployment-6463/pods/test-rolling-update-deployment-c4cb8d6d9-748xx 86c4d559-7953-43c8-80cc-c131efbbc652 1056604 0 2021-09-03 13:55:15 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-c4cb8d6d9 89abb1e1-b8c6-4297-813d-b05706839fa0 0xc00296fe30 0xc00296fe31}] [] [{kube-controller-manager Update v1 2021-09-03 13:55:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"89abb1e1-b8c6-4297-813d-b05706839fa0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-09-03 13:55:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.152\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sd7qb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sd7qb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resourc
es:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sd7qb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-5n8xl,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]
EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:55:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:55:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:55:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:55:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:192.168.2.152,StartTime:2021-09-03 13:55:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-09-03 13:55:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://86cb57a6f19dbb2d462e3e675956b205e364904f4d03a82d70fe8cd92e071293,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.152,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:55:21.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6463" for this suite. 
• [SLOW TEST:13.091 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":25,"skipped":326,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:19.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-63e80724-b9a3-4b7b-a1a0-f9afd90102fa STEP: Creating a pod to test consume configMaps Sep 3 13:55:20.022: INFO: Waiting up to 5m0s for pod "pod-configmaps-f5af5e4e-a486-4302-89c0-2214858510a5" in namespace "configmap-4808" to be "Succeeded or Failed" Sep 3 13:55:20.025: INFO: Pod "pod-configmaps-f5af5e4e-a486-4302-89c0-2214858510a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.83228ms Sep 3 13:55:22.119: INFO: Pod "pod-configmaps-f5af5e4e-a486-4302-89c0-2214858510a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096694184s Sep 3 13:55:24.321: INFO: Pod "pod-configmaps-f5af5e4e-a486-4302-89c0-2214858510a5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.298413835s Sep 3 13:55:26.418: INFO: Pod "pod-configmaps-f5af5e4e-a486-4302-89c0-2214858510a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.395349729s STEP: Saw pod success Sep 3 13:55:26.418: INFO: Pod "pod-configmaps-f5af5e4e-a486-4302-89c0-2214858510a5" satisfied condition "Succeeded or Failed" Sep 3 13:55:26.421: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-configmaps-f5af5e4e-a486-4302-89c0-2214858510a5 container configmap-volume-test: STEP: delete the pod Sep 3 13:55:26.529: INFO: Waiting for pod pod-configmaps-f5af5e4e-a486-4302-89c0-2214858510a5 to disappear Sep 3 13:55:26.532: INFO: Pod pod-configmaps-f5af5e4e-a486-4302-89c0-2214858510a5 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:55:26.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4808" for this suite. 
• [SLOW TEST:6.639 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":691,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:18.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-83a0ed3f-af67-4475-8509-c71f3763823a STEP: Creating a pod to test consume configMaps Sep 3 13:55:18.559: INFO: Waiting up to 5m0s for pod "pod-configmaps-73bd3a1e-1035-4b4b-b8c5-09af9e3e18ad" in namespace "configmap-3808" to be "Succeeded or Failed" Sep 3 13:55:18.562: INFO: Pod "pod-configmaps-73bd3a1e-1035-4b4b-b8c5-09af9e3e18ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.562478ms Sep 3 13:55:20.565: INFO: Pod "pod-configmaps-73bd3a1e-1035-4b4b-b8c5-09af9e3e18ad": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005957918s Sep 3 13:55:22.569: INFO: Pod "pod-configmaps-73bd3a1e-1035-4b4b-b8c5-09af9e3e18ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009998924s Sep 3 13:55:24.620: INFO: Pod "pod-configmaps-73bd3a1e-1035-4b4b-b8c5-09af9e3e18ad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060993877s Sep 3 13:55:26.624: INFO: Pod "pod-configmaps-73bd3a1e-1035-4b4b-b8c5-09af9e3e18ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.064394787s STEP: Saw pod success Sep 3 13:55:26.624: INFO: Pod "pod-configmaps-73bd3a1e-1035-4b4b-b8c5-09af9e3e18ad" satisfied condition "Succeeded or Failed" Sep 3 13:55:26.627: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-configmaps-73bd3a1e-1035-4b4b-b8c5-09af9e3e18ad container configmap-volume-test: STEP: delete the pod Sep 3 13:55:26.724: INFO: Waiting for pod pod-configmaps-73bd3a1e-1035-4b4b-b8c5-09af9e3e18ad to disappear Sep 3 13:55:26.818: INFO: Pod pod-configmaps-73bd3a1e-1035-4b4b-b8c5-09af9e3e18ad no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:55:26.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3808" for this suite. 
• [SLOW TEST:8.318 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":284,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:08.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Sep 3 13:55:19.001: INFO: Successfully updated pod "pod-update-activedeadlineseconds-482c3b7b-dd94-4247-933d-a3cc8a664381" Sep 3 13:55:19.001: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-482c3b7b-dd94-4247-933d-a3cc8a664381" in namespace "pods-3038" to be "terminated due to deadline exceeded" Sep 3 13:55:19.003: INFO: Pod "pod-update-activedeadlineseconds-482c3b7b-dd94-4247-933d-a3cc8a664381": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.698217ms Sep 3 13:55:21.008: INFO: Pod "pod-update-activedeadlineseconds-482c3b7b-dd94-4247-933d-a3cc8a664381": Phase="Running", Reason="", readiness=true. Elapsed: 2.006879302s Sep 3 13:55:23.011: INFO: Pod "pod-update-activedeadlineseconds-482c3b7b-dd94-4247-933d-a3cc8a664381": Phase="Running", Reason="", readiness=true. Elapsed: 4.010360881s Sep 3 13:55:25.120: INFO: Pod "pod-update-activedeadlineseconds-482c3b7b-dd94-4247-933d-a3cc8a664381": Phase="Running", Reason="", readiness=true. Elapsed: 6.11936911s Sep 3 13:55:27.123: INFO: Pod "pod-update-activedeadlineseconds-482c3b7b-dd94-4247-933d-a3cc8a664381": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 8.122546186s Sep 3 13:55:27.123: INFO: Pod "pod-update-activedeadlineseconds-482c3b7b-dd94-4247-933d-a3cc8a664381" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:55:27.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3038" for this suite. 
• [SLOW TEST:18.686 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:54:49.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Sep 3 13:54:53.481: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-653 PodName:var-expansion-3a044fdd-9101-4423-830c-abcc6cf32aa6 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 3 13:54:53.481: INFO: >>> kubeConfig: /root/.kube/config STEP: test for file in mounted path Sep 3 13:54:53.608: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-653 PodName:var-expansion-3a044fdd-9101-4423-830c-abcc6cf32aa6 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 3 13:54:53.608: INFO: >>> kubeConfig: /root/.kube/config STEP: updating the annotation value Sep 3 13:54:54.190: INFO: Successfully updated pod "var-expansion-3a044fdd-9101-4423-830c-abcc6cf32aa6" STEP: waiting for annotated pod running STEP: deleting the pod gracefully 
Sep 3 13:54:54.192: INFO: Deleting pod "var-expansion-3a044fdd-9101-4423-830c-abcc6cf32aa6" in namespace "var-expansion-653" Sep 3 13:54:54.195: INFO: Wait up to 5m0s for pod "var-expansion-3a044fdd-9101-4423-830c-abcc6cf32aa6" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:55:32.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-653" for this suite. • [SLOW TEST:42.773 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":-1,"completed":30,"skipped":697,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:32.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] 
ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:55:32.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3857" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":31,"skipped":704,"failed":0} SS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:15.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components Sep 3 13:55:15.443: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Sep 3 13:55:15.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-711 create -f -' Sep 3 13:55:15.794: INFO: stderr: "" Sep 3 13:55:15.794: INFO: stdout: "service/agnhost-replica created\n" Sep 3 13:55:15.795: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Sep 3 13:55:15.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=kubectl-711 create -f -' Sep 3 13:55:16.061: INFO: stderr: "" Sep 3 13:55:16.061: INFO: stdout: "service/agnhost-primary created\n" Sep 3 13:55:16.062: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Sep 3 13:55:16.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-711 create -f -' Sep 3 13:55:16.359: INFO: stderr: "" Sep 3 13:55:16.359: INFO: stdout: "service/frontend created\n" Sep 3 13:55:16.359: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Sep 3 13:55:16.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-711 create -f -' Sep 3 13:55:16.648: INFO: stderr: "" Sep 3 13:55:16.648: INFO: stdout: "deployment.apps/frontend created\n" Sep 3 13:55:16.648: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Sep 3 13:55:16.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-711 create -f -' Sep 3 13:55:16.908: INFO: stderr: "" Sep 3 
13:55:16.908: INFO: stdout: "deployment.apps/agnhost-primary created\n" Sep 3 13:55:16.909: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Sep 3 13:55:16.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-711 create -f -' Sep 3 13:55:17.184: INFO: stderr: "" Sep 3 13:55:17.184: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Sep 3 13:55:17.184: INFO: Waiting for all frontend pods to be Running. Sep 3 13:55:32.235: INFO: Waiting for frontend to serve content. Sep 3 13:55:32.244: INFO: Trying to add a new entry to the guestbook. Sep 3 13:55:32.258: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Sep 3 13:55:32.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-711 delete --grace-period=0 --force -f -' Sep 3 13:55:32.398: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 3 13:55:32.398: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Sep 3 13:55:32.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-711 delete --grace-period=0 --force -f -' Sep 3 13:55:32.527: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Sep 3 13:55:32.527: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Sep 3 13:55:32.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-711 delete --grace-period=0 --force -f -' Sep 3 13:55:32.659: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 3 13:55:32.659: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Sep 3 13:55:32.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-711 delete --grace-period=0 --force -f -' Sep 3 13:55:32.785: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 3 13:55:32.785: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Sep 3 13:55:32.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-711 delete --grace-period=0 --force -f -' Sep 3 13:55:32.912: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 3 13:55:32.912: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Sep 3 13:55:32.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-711 delete --grace-period=0 --force -f -' Sep 3 13:55:33.046: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Sep 3 13:55:33.046: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:55:33.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-711" for this suite. • [SLOW TEST:17.637 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":23,"skipped":386,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:26.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 3 13:55:27.538: INFO: deployment 
"sample-webhook-deployment" doesn't have the required revision set Sep 3 13:55:29.548: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274127, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274127, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274127, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274127, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 3 13:55:31.552: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274127, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274127, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274127, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274127, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 3 13:55:34.564: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:55:34.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8718" for this suite. STEP: Destroying namespace "webhook-8718-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.757 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":15,"skipped":295,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:34.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 3 13:55:34.752: INFO: Creating deployment "test-recreate-deployment" Sep 3 13:55:34.757: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Sep 3 13:55:34.763: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Sep 3 13:55:36.771: INFO: Waiting deployment "test-recreate-deployment" to complete Sep 3 
13:55:36.774: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Sep 3 13:55:36.784: INFO: Updating deployment test-recreate-deployment Sep 3 13:55:36.784: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 3 13:55:36.833: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-4584 /apis/apps/v1/namespaces/deployment-4584/deployments/test-recreate-deployment d9cb8eb7-a7db-48f9-96b3-38dd65a2376a 1057127 2 2021-09-03 13:55:34 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-09-03 13:55:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-09-03 13:55:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004b02568 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-09-03 13:55:36 +0000 UTC,LastTransitionTime:2021-09-03 13:55:36 +0000 
UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2021-09-03 13:55:36 +0000 UTC,LastTransitionTime:2021-09-03 13:55:34 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Sep 3 13:55:36.836: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-4584 /apis/apps/v1/namespaces/deployment-4584/replicasets/test-recreate-deployment-f79dd4667 7b9d0832-45a4-4772-8393-a81cdfb4dd5e 1057125 1 2021-09-03 13:55:36 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment d9cb8eb7-a7db-48f9-96b3-38dd65a2376a 0xc004b02ad0 0xc004b02ad1}] [] [{kube-controller-manager Update apps/v1 2021-09-03 13:55:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cb8eb7-a7db-48f9-96b3-38dd65a2376a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004b02b48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil 
default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 3 13:55:36.836: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Sep 3 13:55:36.836: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-c96cf48f deployment-4584 /apis/apps/v1/namespaces/deployment-4584/replicasets/test-recreate-deployment-c96cf48f 04deb3b0-62da-4869-9ed6-ffa05dc3c000 1057116 2 2021-09-03 13:55:34 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment d9cb8eb7-a7db-48f9-96b3-38dd65a2376a 0xc004b0299f 0xc004b029b0}] [] [{kube-controller-manager Update apps/v1 2021-09-03 13:55:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9cb8eb7-a7db-48f9-96b3-38dd65a2376a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelect
or{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: c96cf48f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004b02a48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 3 13:55:36.838: INFO: Pod "test-recreate-deployment-f79dd4667-76lvd" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-76lvd test-recreate-deployment-f79dd4667- deployment-4584 /api/v1/namespaces/deployment-4584/pods/test-recreate-deployment-f79dd4667-76lvd cde7af3f-279a-481d-85ac-8a622794aa4f 1057123 0 2021-09-03 13:55:36 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 7b9d0832-45a4-4772-8393-a81cdfb4dd5e 0xc00454cf40 0xc00454cf41}] [] [{kube-controller-manager Update v1 2021-09-03 13:55:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b9d0832-45a4-4772-8393-a81cdfb4dd5e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plbpk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plbpk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plbpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-kali-md-0-76b6798f7f-5n8xl,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-03 13:55:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:55:36.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4584" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":16,"skipped":308,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:33.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 3 13:55:33.096: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b40b3567-a00f-44a3-9eb1-61698034a230" in namespace "projected-943" to be "Succeeded or Failed" Sep 3 13:55:33.099: INFO: Pod "downwardapi-volume-b40b3567-a00f-44a3-9eb1-61698034a230": Phase="Pending", Reason="", readiness=false. Elapsed: 2.991268ms Sep 3 13:55:35.102: INFO: Pod "downwardapi-volume-b40b3567-a00f-44a3-9eb1-61698034a230": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006852431s Sep 3 13:55:37.107: INFO: Pod "downwardapi-volume-b40b3567-a00f-44a3-9eb1-61698034a230": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011400779s STEP: Saw pod success Sep 3 13:55:37.107: INFO: Pod "downwardapi-volume-b40b3567-a00f-44a3-9eb1-61698034a230" satisfied condition "Succeeded or Failed" Sep 3 13:55:37.110: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod downwardapi-volume-b40b3567-a00f-44a3-9eb1-61698034a230 container client-container: STEP: delete the pod Sep 3 13:55:37.127: INFO: Waiting for pod downwardapi-volume-b40b3567-a00f-44a3-9eb1-61698034a230 to disappear Sep 3 13:55:37.130: INFO: Pod downwardapi-volume-b40b3567-a00f-44a3-9eb1-61698034a230 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:55:37.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-943" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":390,"failed":0} S ------------------------------ [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:26.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Sep 3 13:55:38.941: INFO: ExecWithOptions {Command:[cat 
/etc/hosts] Namespace:e2e-kubelet-etc-hosts-8557 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 3 13:55:38.942: INFO: >>> kubeConfig: /root/.kube/config Sep 3 13:55:39.317: INFO: Exec stderr: "" Sep 3 13:55:39.317: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8557 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 3 13:55:39.317: INFO: >>> kubeConfig: /root/.kube/config Sep 3 13:55:39.433: INFO: Exec stderr: "" Sep 3 13:55:39.433: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8557 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 3 13:55:39.433: INFO: >>> kubeConfig: /root/.kube/config Sep 3 13:55:39.541: INFO: Exec stderr: "" Sep 3 13:55:39.541: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8557 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 3 13:55:39.541: INFO: >>> kubeConfig: /root/.kube/config Sep 3 13:55:39.660: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Sep 3 13:55:39.660: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8557 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 3 13:55:39.660: INFO: >>> kubeConfig: /root/.kube/config Sep 3 13:55:39.778: INFO: Exec stderr: "" Sep 3 13:55:39.778: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8557 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 3 13:55:39.778: INFO: >>> kubeConfig: /root/.kube/config Sep 3 13:55:39.905: INFO: Exec stderr: "" 
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Sep 3 13:55:39.905: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8557 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 3 13:55:39.905: INFO: >>> kubeConfig: /root/.kube/config Sep 3 13:55:39.990: INFO: Exec stderr: "" Sep 3 13:55:39.990: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8557 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 3 13:55:39.990: INFO: >>> kubeConfig: /root/.kube/config Sep 3 13:55:40.122: INFO: Exec stderr: "" Sep 3 13:55:40.122: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8557 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 3 13:55:40.122: INFO: >>> kubeConfig: /root/.kube/config Sep 3 13:55:40.324: INFO: Exec stderr: "" Sep 3 13:55:40.324: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8557 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 3 13:55:40.324: INFO: >>> kubeConfig: /root/.kube/config Sep 3 13:55:40.444: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:55:40.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-8557" for this suite. 
• [SLOW TEST:13.773 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":725,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:40.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:55:40.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2298" for this suite. 
• ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":764,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:37.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container Sep 3 13:55:43.716: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9982 pod-service-account-a529a229-63b2-4de5-ba67-fb90f20f3e80 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Sep 3 13:55:44.159: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9982 pod-service-account-a529a229-63b2-4de5-ba67-fb90f20f3e80 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Sep 3 13:55:44.400: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9982 pod-service-account-a529a229-63b2-4de5-ba67-fb90f20f3e80 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:55:44.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9982" for this suite. 
• [SLOW TEST:7.518 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":25,"skipped":391,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:40.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 3 13:55:40.649: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7f425ae2-e8e6-4efd-aed4-9451989ff4b0" in namespace "projected-6090" to be "Succeeded or Failed" Sep 3 13:55:40.651: INFO: Pod "downwardapi-volume-7f425ae2-e8e6-4efd-aed4-9451989ff4b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.780176ms Sep 3 13:55:42.656: INFO: Pod "downwardapi-volume-7f425ae2-e8e6-4efd-aed4-9451989ff4b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006892178s Sep 3 13:55:44.659: INFO: Pod "downwardapi-volume-7f425ae2-e8e6-4efd-aed4-9451989ff4b0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010127808s STEP: Saw pod success Sep 3 13:55:44.659: INFO: Pod "downwardapi-volume-7f425ae2-e8e6-4efd-aed4-9451989ff4b0" satisfied condition "Succeeded or Failed" Sep 3 13:55:44.662: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod downwardapi-volume-7f425ae2-e8e6-4efd-aed4-9451989ff4b0 container client-container: STEP: delete the pod Sep 3 13:55:44.675: INFO: Waiting for pod downwardapi-volume-7f425ae2-e8e6-4efd-aed4-9451989ff4b0 to disappear Sep 3 13:55:44.678: INFO: Pod downwardapi-volume-7f425ae2-e8e6-4efd-aed4-9451989ff4b0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:55:44.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6090" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":774,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:44.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 3 13:55:44.713: INFO: Waiting up to 5m0s 
for pod "downwardapi-volume-89756ac4-e1ae-44bf-97bc-ce53dcd2f5ea" in namespace "downward-api-9258" to be "Succeeded or Failed" Sep 3 13:55:44.716: INFO: Pod "downwardapi-volume-89756ac4-e1ae-44bf-97bc-ce53dcd2f5ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.88035ms Sep 3 13:55:46.720: INFO: Pod "downwardapi-volume-89756ac4-e1ae-44bf-97bc-ce53dcd2f5ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00698715s STEP: Saw pod success Sep 3 13:55:46.720: INFO: Pod "downwardapi-volume-89756ac4-e1ae-44bf-97bc-ce53dcd2f5ea" satisfied condition "Succeeded or Failed" Sep 3 13:55:46.724: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-7jvhm pod downwardapi-volume-89756ac4-e1ae-44bf-97bc-ce53dcd2f5ea container client-container: STEP: delete the pod Sep 3 13:55:46.741: INFO: Waiting for pod downwardapi-volume-89756ac4-e1ae-44bf-97bc-ce53dcd2f5ea to disappear Sep 3 13:55:46.744: INFO: Pod downwardapi-volume-89756ac4-e1ae-44bf-97bc-ce53dcd2f5ea no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:55:46.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9258" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":399,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:44.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all Sep 3 13:55:44.754: INFO: Waiting up to 5m0s for pod "client-containers-e73ea44d-d0be-4270-9ff8-2d850aa456a9" in namespace "containers-8140" to be "Succeeded or Failed" Sep 3 13:55:44.756: INFO: Pod "client-containers-e73ea44d-d0be-4270-9ff8-2d850aa456a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.522131ms Sep 3 13:55:46.759: INFO: Pod "client-containers-e73ea44d-d0be-4270-9ff8-2d850aa456a9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.005650281s STEP: Saw pod success Sep 3 13:55:46.760: INFO: Pod "client-containers-e73ea44d-d0be-4270-9ff8-2d850aa456a9" satisfied condition "Succeeded or Failed" Sep 3 13:55:46.762: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod client-containers-e73ea44d-d0be-4270-9ff8-2d850aa456a9 container test-container: STEP: delete the pod Sep 3 13:55:46.776: INFO: Waiting for pod client-containers-e73ea44d-d0be-4270-9ff8-2d850aa456a9 to disappear Sep 3 13:55:46.778: INFO: Pod client-containers-e73ea44d-d0be-4270-9ff8-2d850aa456a9 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:55:46.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8140" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":794,"failed":0} SSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:36.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 
'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:55:57.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1241" for this suite. 
• [SLOW TEST:20.287 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":340,"failed":0} S ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:21.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2753 A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@dns-test-service.dns-2753;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2753 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2753;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2753.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2753.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2753.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2753.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2753.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2753.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2753.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2753.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2753.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2753.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2753.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2753.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2753.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 105.63.128.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.128.63.105_udp@PTR;check="$$(dig +tcp +noall +answer +search 105.63.128.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.128.63.105_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2753 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2753;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2753 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2753;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2753.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2753.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2753.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2753.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2753.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2753.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2753.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2753.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2753.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2753.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2753.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2753.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2753.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 105.63.128.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.128.63.105_udp@PTR;check="$$(dig +tcp +noall +answer +search 105.63.128.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.128.63.105_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 3 13:55:27.595: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:27.599: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:27.603: INFO: Unable to read wheezy_udp@dns-test-service.dns-2753 from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:27.607: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2753 from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:27.611: INFO: Unable to read wheezy_udp@dns-test-service.dns-2753.svc from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods 
dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:27.614: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2753.svc from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:27.624: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2753.svc from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:27.651: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:27.656: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:27.660: INFO: Unable to read jessie_udp@dns-test-service.dns-2753 from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:27.663: INFO: Unable to read jessie_tcp@dns-test-service.dns-2753 from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:27.667: INFO: Unable to read jessie_udp@dns-test-service.dns-2753.svc from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:27.670: INFO: Unable to read jessie_tcp@dns-test-service.dns-2753.svc from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get 
pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:27.677: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2753.svc from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:27.700: INFO: Lookups using dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2753 wheezy_tcp@dns-test-service.dns-2753 wheezy_udp@dns-test-service.dns-2753.svc wheezy_tcp@dns-test-service.dns-2753.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2753.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2753 jessie_tcp@dns-test-service.dns-2753 jessie_udp@dns-test-service.dns-2753.svc jessie_tcp@dns-test-service.dns-2753.svc jessie_tcp@_http._tcp.dns-test-service.dns-2753.svc] Sep 3 13:55:32.705: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:32.710: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:32.714: INFO: Unable to read wheezy_udp@dns-test-service.dns-2753 from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:32.717: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2753 from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:32.721: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-2753.svc from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:32.724: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2753.svc from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:32.759: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:32.762: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:32.765: INFO: Unable to read jessie_udp@dns-test-service.dns-2753 from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:32.769: INFO: Unable to read jessie_tcp@dns-test-service.dns-2753 from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:32.772: INFO: Unable to read jessie_udp@dns-test-service.dns-2753.svc from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:32.775: INFO: Unable to read jessie_tcp@dns-test-service.dns-2753.svc from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:32.804: INFO: Lookups using 
dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2753 wheezy_tcp@dns-test-service.dns-2753 wheezy_udp@dns-test-service.dns-2753.svc wheezy_tcp@dns-test-service.dns-2753.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2753 jessie_tcp@dns-test-service.dns-2753 jessie_udp@dns-test-service.dns-2753.svc jessie_tcp@dns-test-service.dns-2753.svc] Sep 3 13:55:37.704: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:37.708: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:37.711: INFO: Unable to read wheezy_udp@dns-test-service.dns-2753 from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:37.715: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2753 from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:37.718: INFO: Unable to read wheezy_udp@dns-test-service.dns-2753.svc from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:37.721: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2753.svc from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:37.755: 
INFO: Unable to read jessie_udp@dns-test-service from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:37.760: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:37.764: INFO: Unable to read jessie_udp@dns-test-service.dns-2753 from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:37.767: INFO: Unable to read jessie_tcp@dns-test-service.dns-2753 from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:37.771: INFO: Unable to read jessie_udp@dns-test-service.dns-2753.svc from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:37.775: INFO: Unable to read jessie_tcp@dns-test-service.dns-2753.svc from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:37.805: INFO: Lookups using dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2753 wheezy_tcp@dns-test-service.dns-2753 wheezy_udp@dns-test-service.dns-2753.svc wheezy_tcp@dns-test-service.dns-2753.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2753 jessie_tcp@dns-test-service.dns-2753 jessie_udp@dns-test-service.dns-2753.svc jessie_tcp@dns-test-service.dns-2753.svc] 
Sep 3 13:55:42.705: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:42.709: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:42.713: INFO: Unable to read wheezy_udp@dns-test-service.dns-2753 from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:42.717: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2753 from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:42.721: INFO: Unable to read wheezy_udp@dns-test-service.dns-2753.svc from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:42.725: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2753.svc from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:42.761: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:42.764: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:42.768: INFO: Unable to 
read jessie_udp@dns-test-service.dns-2753 from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:42.772: INFO: Unable to read jessie_tcp@dns-test-service.dns-2753 from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:42.777: INFO: Unable to read jessie_udp@dns-test-service.dns-2753.svc from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:42.781: INFO: Unable to read jessie_tcp@dns-test-service.dns-2753.svc from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:42.809: INFO: Lookups using dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2753 wheezy_tcp@dns-test-service.dns-2753 wheezy_udp@dns-test-service.dns-2753.svc wheezy_tcp@dns-test-service.dns-2753.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2753 jessie_tcp@dns-test-service.dns-2753 jessie_udp@dns-test-service.dns-2753.svc jessie_tcp@dns-test-service.dns-2753.svc] Sep 3 13:55:47.705: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:47.709: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 
13:55:47.713: INFO: Unable to read wheezy_udp@dns-test-service.dns-2753 from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:47.717: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2753 from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:47.720: INFO: Unable to read wheezy_udp@dns-test-service.dns-2753.svc from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:47.724: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2753.svc from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:47.757: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:47.761: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:47.765: INFO: Unable to read jessie_udp@dns-test-service.dns-2753 from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:47.768: INFO: Unable to read jessie_tcp@dns-test-service.dns-2753 from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:47.772: 
INFO: Unable to read jessie_udp@dns-test-service.dns-2753.svc from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:47.775: INFO: Unable to read jessie_tcp@dns-test-service.dns-2753.svc from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:47.804: INFO: Lookups using dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2753 wheezy_tcp@dns-test-service.dns-2753 wheezy_udp@dns-test-service.dns-2753.svc wheezy_tcp@dns-test-service.dns-2753.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2753 jessie_tcp@dns-test-service.dns-2753 jessie_udp@dns-test-service.dns-2753.svc jessie_tcp@dns-test-service.dns-2753.svc] Sep 3 13:55:52.705: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:52.709: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:52.712: INFO: Unable to read wheezy_udp@dns-test-service.dns-2753 from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:52.716: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2753 from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) 
Sep 3 13:55:52.720: INFO: Unable to read wheezy_udp@dns-test-service.dns-2753.svc from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:52.724: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2753.svc from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:52.823: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:52.828: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:52.831: INFO: Unable to read jessie_udp@dns-test-service.dns-2753 from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:52.836: INFO: Unable to read jessie_tcp@dns-test-service.dns-2753 from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:52.840: INFO: Unable to read jessie_udp@dns-test-service.dns-2753.svc from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 13:55:52.846: INFO: Unable to read jessie_tcp@dns-test-service.dns-2753.svc from pod dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454: the server could not find the requested resource (get pods dns-test-64842559-eb4e-4d21-91f9-15afa4816454) Sep 3 
13:55:52.876: INFO: Lookups using dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2753 wheezy_tcp@dns-test-service.dns-2753 wheezy_udp@dns-test-service.dns-2753.svc wheezy_tcp@dns-test-service.dns-2753.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2753 jessie_tcp@dns-test-service.dns-2753 jessie_udp@dns-test-service.dns-2753.svc jessie_tcp@dns-test-service.dns-2753.svc] Sep 3 13:55:57.812: INFO: DNS probes using dns-2753/dns-test-64842559-eb4e-4d21-91f9-15afa4816454 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:55:57.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2753" for this suite. 
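The probe commands in the DNS spec above derive two query names from the pod's IP: an A-record name built by dashing the octets (the `hostname -i | awk -F. ...` pipeline, producing e.g. `10-128-63-105.dns-2753.pod.cluster.local`) and a PTR name built by reversing the octets under `in-addr.arpa.` (e.g. `105.63.128.10.in-addr.arpa.` for `10.128.63.105`). A small sketch of both constructions:

```python
def pod_a_record(ipv4: str, namespace: str) -> str:
    """Pod A-record name: IP with dots replaced by dashes, then
    <namespace>.pod.cluster.local, as the probe's awk pipeline builds it."""
    return ipv4.replace(".", "-") + f".{namespace}.pod.cluster.local"

def ptr_name(ipv4: str) -> str:
    """Reverse-DNS (PTR) query name: octets in reverse order with the
    in-addr.arpa. suffix, as passed to dig in the probe loop."""
    return ".".join(reversed(ipv4.split("."))) + ".in-addr.arpa."
```

Both outputs match the names visible in the logged probe commands for pod IP `10.128.63.105` in namespace `dns-2753`.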
• [SLOW TEST:36.319 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":26,"skipped":355,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:57.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command Sep 3 13:55:57.908: INFO: Waiting up to 5m0s for pod "client-containers-99e7d433-0221-44f2-8cb5-7de930d70241" in namespace "containers-1680" to be "Succeeded or Failed" Sep 3 13:55:57.911: INFO: Pod "client-containers-99e7d433-0221-44f2-8cb5-7de930d70241": Phase="Pending", Reason="", readiness=false. Elapsed: 2.703261ms Sep 3 13:55:59.914: INFO: Pod "client-containers-99e7d433-0221-44f2-8cb5-7de930d70241": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006252979s STEP: Saw pod success Sep 3 13:55:59.914: INFO: Pod "client-containers-99e7d433-0221-44f2-8cb5-7de930d70241" satisfied condition "Succeeded or Failed" Sep 3 13:55:59.917: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-7jvhm pod client-containers-99e7d433-0221-44f2-8cb5-7de930d70241 container test-container: STEP: delete the pod Sep 3 13:55:59.931: INFO: Waiting for pod client-containers-99e7d433-0221-44f2-8cb5-7de930d70241 to disappear Sep 3 13:55:59.933: INFO: Pod client-containers-99e7d433-0221-44f2-8cb5-7de930d70241 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:55:59.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1680" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":369,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:46.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-6156 [It] Should recreate evicted statefulset 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-6156 STEP: Creating statefulset with conflicting port in namespace statefulset-6156 STEP: Waiting until pod test-pod will start running in namespace statefulset-6156 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6156 Sep 3 13:55:50.856: INFO: Observed stateful pod in namespace: statefulset-6156, name: ss-0, uid: c8095e0d-dc1f-4631-ad66-5bd8f8a36e0d, status phase: Pending. Waiting for statefulset controller to delete. Sep 3 13:55:50.989: INFO: Observed stateful pod in namespace: statefulset-6156, name: ss-0, uid: c8095e0d-dc1f-4631-ad66-5bd8f8a36e0d, status phase: Failed. Waiting for statefulset controller to delete. Sep 3 13:55:50.996: INFO: Observed stateful pod in namespace: statefulset-6156, name: ss-0, uid: c8095e0d-dc1f-4631-ad66-5bd8f8a36e0d, status phase: Failed. Waiting for statefulset controller to delete. 
Sep 3 13:55:50.999: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6156 STEP: Removing pod with conflicting port in namespace statefulset-6156 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6156 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 3 13:55:55.016: INFO: Deleting all statefulset in ns statefulset-6156 Sep 3 13:55:55.020: INFO: Scaling statefulset ss to 0 Sep 3 13:56:05.036: INFO: Waiting for statefulset status.replicas updated to 0 Sep 3 13:56:05.039: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:05.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6156" for this suite. 
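The StatefulSet spec above watches pod `ss-0` go Pending, then Failed (port conflict), then be deleted by the controller, and finally waits for it to be recreated and reach Running. A minimal sketch of that wait loop, where `get_pod` is a caller-supplied stand-in for a real API-client lookup (the actual test uses the e2e framework's watch machinery, not polling):

```python
import time

def wait_for_recreation(get_pod, original_uid: str,
                        timeout_s: float = 60.0, interval_s: float = 1.0) -> str:
    """Poll until the stateful pod reappears with a new UID in Running phase.

    get_pod returns (uid, phase) for the current pod, or None while the pod
    is absent. Returns the new UID, or raises TimeoutError on timeout.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        observed = get_pod()
        if observed is not None:
            uid, phase = observed
            # A different UID proves the controller recreated the pod
            # rather than restarting the original one.
            if uid != original_uid and phase == "Running":
                return uid
        time.sleep(interval_s)
    raise TimeoutError("stateful pod was not recreated in time")
```

Comparing UIDs, as the logged observations do (`uid: c8095e0d-...` across phases), is what distinguishes recreation from a restart of the same pod object.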
• [SLOW TEST:18.262 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":35,"skipped":797,"failed":0} S ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:32.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8601.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@dns-test-service-2.dns-8601.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8601.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8601.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8601.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8601.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8601.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8601.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8601.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8601.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 3 13:55:36.333: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:36.337: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:36.341: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:36.345: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:36.357: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:36.361: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local from pod 
dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:36.365: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:36.369: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:36.375: INFO: Lookups using dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8601.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8601.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local jessie_udp@dns-test-service-2.dns-8601.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8601.svc.cluster.local] Sep 3 13:55:41.418: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:41.422: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:41.427: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8601.svc.cluster.local from pod 
dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:41.431: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:41.442: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:41.445: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:41.449: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:41.453: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:41.460: INFO: Lookups using dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8601.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8601.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local jessie_udp@dns-test-service-2.dns-8601.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8601.svc.cluster.local] Sep 3 13:55:46.380: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:46.384: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:46.388: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:46.392: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:46.405: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:46.409: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:46.413: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8601.svc.cluster.local from pod 
dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:46.417: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:46.424: INFO: Lookups using dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8601.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8601.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local jessie_udp@dns-test-service-2.dns-8601.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8601.svc.cluster.local] Sep 3 13:55:51.381: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:51.385: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:51.389: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:51.392: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8601.svc.cluster.local from pod 
dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:51.404: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:51.408: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:51.412: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:51.416: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:51.424: INFO: Lookups using dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8601.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8601.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local jessie_udp@dns-test-service-2.dns-8601.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8601.svc.cluster.local] Sep 3 13:55:56.380: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local 
from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:56.384: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:56.387: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:56.391: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:56.402: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:56.406: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:56.409: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:56.413: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the 
server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:55:56.420: INFO: Lookups using dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8601.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8601.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local jessie_udp@dns-test-service-2.dns-8601.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8601.svc.cluster.local] Sep 3 13:56:01.380: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:56:01.384: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:56:01.387: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:56:01.390: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:56:01.401: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local from pod 
dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:56:01.404: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:56:01.407: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:56:01.410: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8601.svc.cluster.local from pod dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4: the server could not find the requested resource (get pods dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4) Sep 3 13:56:01.417: INFO: Lookups using dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8601.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8601.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8601.svc.cluster.local jessie_udp@dns-test-service-2.dns-8601.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8601.svc.cluster.local] Sep 3 13:56:06.421: INFO: DNS probes using dns-8601/dns-test-0cdd78d6-d383-469a-a50d-84c12e5008c4 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:06.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "dns-8601" for this suite. • [SLOW TEST:34.162 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":32,"skipped":706,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:06.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy Sep 3 13:56:06.485: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7984 proxy --unix-socket=/tmp/kubectl-proxy-unix378762617/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:06.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7984" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":33,"skipped":710,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:00.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-8845 STEP: creating service affinity-nodeport in namespace services-8845 STEP: creating replication controller affinity-nodeport in namespace services-8845 I0903 13:56:00.071500 28 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-8845, replica count: 3 I0903 13:56:03.122088 28 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 3 13:56:03.133: INFO: Creating new exec pod Sep 3 13:56:06.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8845 exec execpod-affinitytp28r -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Sep 3 13:56:06.396: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" Sep 3 13:56:06.396: INFO: stdout: "" Sep 3 13:56:06.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=services-8845 exec execpod-affinitytp28r -- /bin/sh -x -c nc -zv -t -w 2 10.129.59.1 80' Sep 3 13:56:06.611: INFO: stderr: "+ nc -zv -t -w 2 10.129.59.1 80\nConnection to 10.129.59.1 80 port [tcp/http] succeeded!\n" Sep 3 13:56:06.611: INFO: stdout: "" Sep 3 13:56:06.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8845 exec execpod-affinitytp28r -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.9 32107' Sep 3 13:56:06.853: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.9 32107\nConnection to 172.18.0.9 32107 port [tcp/32107] succeeded!\n" Sep 3 13:56:06.853: INFO: stdout: "" Sep 3 13:56:06.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8845 exec execpod-affinitytp28r -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.10 32107' Sep 3 13:56:07.093: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.10 32107\nConnection to 172.18.0.10 32107 port [tcp/32107] succeeded!\n" Sep 3 13:56:07.093: INFO: stdout: "" Sep 3 13:56:07.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8845 exec execpod-affinitytp28r -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.9:32107/ ; done' Sep 3 13:56:07.459: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32107/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32107/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32107/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32107/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32107/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32107/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32107/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32107/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32107/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32107/\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://172.18.0.9:32107/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32107/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32107/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32107/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32107/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32107/\n" Sep 3 13:56:07.459: INFO: stdout: "\naffinity-nodeport-zg88r\naffinity-nodeport-zg88r\naffinity-nodeport-zg88r\naffinity-nodeport-zg88r\naffinity-nodeport-zg88r\naffinity-nodeport-zg88r\naffinity-nodeport-zg88r\naffinity-nodeport-zg88r\naffinity-nodeport-zg88r\naffinity-nodeport-zg88r\naffinity-nodeport-zg88r\naffinity-nodeport-zg88r\naffinity-nodeport-zg88r\naffinity-nodeport-zg88r\naffinity-nodeport-zg88r\naffinity-nodeport-zg88r" Sep 3 13:56:07.459: INFO: Received response from host: affinity-nodeport-zg88r Sep 3 13:56:07.459: INFO: Received response from host: affinity-nodeport-zg88r Sep 3 13:56:07.459: INFO: Received response from host: affinity-nodeport-zg88r Sep 3 13:56:07.459: INFO: Received response from host: affinity-nodeport-zg88r Sep 3 13:56:07.459: INFO: Received response from host: affinity-nodeport-zg88r Sep 3 13:56:07.459: INFO: Received response from host: affinity-nodeport-zg88r Sep 3 13:56:07.459: INFO: Received response from host: affinity-nodeport-zg88r Sep 3 13:56:07.459: INFO: Received response from host: affinity-nodeport-zg88r Sep 3 13:56:07.459: INFO: Received response from host: affinity-nodeport-zg88r Sep 3 13:56:07.459: INFO: Received response from host: affinity-nodeport-zg88r Sep 3 13:56:07.459: INFO: Received response from host: affinity-nodeport-zg88r Sep 3 13:56:07.459: INFO: Received response from host: affinity-nodeport-zg88r Sep 3 13:56:07.459: INFO: Received response from host: affinity-nodeport-zg88r Sep 3 13:56:07.459: INFO: Received response from host: affinity-nodeport-zg88r Sep 3 13:56:07.459: INFO: Received response from host: 
affinity-nodeport-zg88r Sep 3 13:56:07.459: INFO: Received response from host: affinity-nodeport-zg88r Sep 3 13:56:07.460: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-8845, will wait for the garbage collector to delete the pods Sep 3 13:56:07.530: INFO: Deleting ReplicationController affinity-nodeport took: 6.131349ms Sep 3 13:56:07.630: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.332559ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:13.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8845" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:13.725 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":28,"skipped":415,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:57.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-2009
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-2009
STEP: creating replication controller externalsvc in namespace services-2009
I0903 13:55:57.251594 19 runners.go:190] Created replication controller with name: externalsvc, namespace: services-2009, replica count: 2
I0903 13:56:00.302209 19 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the ClusterIP service to type=ExternalName
Sep 3 13:56:00.319: INFO: Creating new exec pod
Sep 3 13:56:02.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2009 exec execpod59wmg -- /bin/sh -x -c nslookup clusterip-service.services-2009.svc.cluster.local'
Sep 3 13:56:02.641: INFO: stderr: "+ nslookup clusterip-service.services-2009.svc.cluster.local\n"
Sep 3 13:56:02.641: INFO: stdout: "Server:\t\t10.128.0.10\nAddress:\t10.128.0.10#53\n\nclusterip-service.services-2009.svc.cluster.local\tcanonical name = externalsvc.services-2009.svc.cluster.local.\nName:\texternalsvc.services-2009.svc.cluster.local\nAddress: 10.134.200.94\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-2009, will wait for the garbage collector to delete the pods
Sep 3 13:56:02.701: INFO: Deleting ReplicationController externalsvc took: 6.485605ms
Sep 3 13:56:02.802: INFO: Terminating ReplicationController externalsvc pods took: 100.363955ms
Sep 3 13:56:13.816: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:56:13.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2009" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:16.633 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to change the type from ClusterIP to ExternalName [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":18,"skipped":341,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:56:13.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-f6d87ad6-37d3-4daa-a83f-e9c2052e05cd
STEP: Creating a pod to test consume secrets
Sep 3 13:56:13.878: INFO: Waiting up to 5m0s for pod "pod-secrets-610c8fbf-2719-4f55-a4bd-a068a95be21c" in namespace "secrets-2366" to be "Succeeded or Failed"
Sep 3 13:56:13.881: INFO: Pod "pod-secrets-610c8fbf-2719-4f55-a4bd-a068a95be21c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.972965ms
Sep 3 13:56:15.885: INFO: Pod "pod-secrets-610c8fbf-2719-4f55-a4bd-a068a95be21c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006859259s
STEP: Saw pod success
Sep 3 13:56:15.885: INFO: Pod "pod-secrets-610c8fbf-2719-4f55-a4bd-a068a95be21c" satisfied condition "Succeeded or Failed"
Sep 3 13:56:15.888: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-7jvhm pod pod-secrets-610c8fbf-2719-4f55-a4bd-a068a95be21c container secret-volume-test:
STEP: delete the pod
Sep 3 13:56:15.900: INFO: Waiting for pod pod-secrets-610c8fbf-2719-4f55-a4bd-a068a95be21c to disappear
Sep 3 13:56:15.902: INFO: Pod pod-secrets-610c8fbf-2719-4f55-a4bd-a068a95be21c no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:56:15.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2366" for this suite.
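The Secrets test above mounts one secret into a pod through a volume and reads it back from the container before checking for the "Succeeded" phase. A minimal manifest exercising the same path might look like the sketch below; all names are illustrative (the framework generates random names such as secret-test-f6d87ad6-...), and busybox stands in for the e2e agnhost image:

```yaml
# Illustrative sketch of the kind of objects this e2e test creates; not the
# framework's actual generated manifests.
apiVersion: v1
kind: Secret
metadata:
  name: demo-secret            # hypothetical name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # Read the secret back from both mount points, then exit 0 so the
    # pod reaches phase Succeeded.
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: demo-secret
  - name: secret-volume-2
    secret:
      secretName: demo-secret
```

Mounting the same secret twice is what makes this the "consumable in multiple volumes" variant rather than the single-volume secret test.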
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":344,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:56:13.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-5e07e20f-b033-4922-af3b-53a525ab0bb8
STEP: Creating a pod to test consume configMaps
Sep 3 13:56:13.974: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d0c8c3fe-b1bd-45bf-af1f-7882185d6a7e" in namespace "projected-7055" to be "Succeeded or Failed"
Sep 3 13:56:13.977: INFO: Pod "pod-projected-configmaps-d0c8c3fe-b1bd-45bf-af1f-7882185d6a7e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.14362ms
Sep 3 13:56:15.980: INFO: Pod "pod-projected-configmaps-d0c8c3fe-b1bd-45bf-af1f-7882185d6a7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00618131s
STEP: Saw pod success
Sep 3 13:56:15.980: INFO: Pod "pod-projected-configmaps-d0c8c3fe-b1bd-45bf-af1f-7882185d6a7e" satisfied condition "Succeeded or Failed"
Sep 3 13:56:15.983: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-projected-configmaps-d0c8c3fe-b1bd-45bf-af1f-7882185d6a7e container projected-configmap-volume-test:
STEP: delete the pod
Sep 3 13:56:15.996: INFO: Waiting for pod pod-projected-configmaps-d0c8c3fe-b1bd-45bf-af1f-7882185d6a7e to disappear
Sep 3 13:56:15.998: INFO: Pod pod-projected-configmaps-d0c8c3fe-b1bd-45bf-af1f-7882185d6a7e no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:56:15.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7055" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":518,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:56:15.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a collection of events [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create set of events
Sep 3 13:56:16.004: INFO: created test-event-1
Sep 3 13:56:16.007: INFO: created test-event-2
Sep 3 13:56:16.010: INFO: created test-event-3
STEP: get a list of Events with a label in the current namespace
STEP: delete collection of events
Sep 3 13:56:16.012: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
Sep 3 13:56:16.024: INFO: requesting list of events to confirm quantity
[AfterEach] [sig-api-machinery] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:56:16.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9298" for this suite.
•S
------------------------------
{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":-1,"completed":20,"skipped":384,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:55:16.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:56:17.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1474" for this suite.
• [SLOW TEST:60.050 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:56:16.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 3 13:56:16.620: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 3 13:56:19.640: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:56:19.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3150" for this suite.
STEP: Destroying namespace "webhook-3150-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":21,"skipped":398,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:56:19.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 3 13:56:19.810: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8871bb88-50e4-4de8-832d-2c49c6284ded" in namespace "downward-api-9907" to be "Succeeded or Failed"
Sep 3 13:56:19.813: INFO: Pod "downwardapi-volume-8871bb88-50e4-4de8-832d-2c49c6284ded": Phase="Pending", Reason="", readiness=false. Elapsed: 2.722603ms
Sep 3 13:56:21.817: INFO: Pod "downwardapi-volume-8871bb88-50e4-4de8-832d-2c49c6284ded": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006958473s
STEP: Saw pod success
Sep 3 13:56:21.817: INFO: Pod "downwardapi-volume-8871bb88-50e4-4de8-832d-2c49c6284ded" satisfied condition "Succeeded or Failed"
Sep 3 13:56:21.820: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod downwardapi-volume-8871bb88-50e4-4de8-832d-2c49c6284ded container client-container:
STEP: delete the pod
Sep 3 13:56:21.835: INFO: Waiting for pod downwardapi-volume-8871bb88-50e4-4de8-832d-2c49c6284ded to disappear
Sep 3 13:56:21.838: INFO: Pod downwardapi-volume-8871bb88-50e4-4de8-832d-2c49c6284ded no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:56:21.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9907" for this suite.
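The Downward API volume test above exposes a container's memory limit to the container itself through a file. A minimal sketch of such a pod, assuming illustrative names (the framework's generated pod uses an agnhost image and a random name), might be:

```yaml
# Illustrative sketch of a downward API volume exposing limits.memory;
# not the framework's actual generated manifest.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # Print the projected memory limit (in bytes) and exit.
    command: ["cat", "/etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```

The test then reads the container's logs (as the log lines above show for container client-container) and checks that the file contents match the declared limit.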
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":413,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:56:21.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:56:21.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2524" for this suite.
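The QOS Class test above submits a pod whose requests equal its limits for both cpu and memory, which is the condition under which the API server assigns the Guaranteed QoS class. A sketch of such a pod spec, with illustrative names and values:

```yaml
# Illustrative sketch: requests == limits for cpu and memory yields
# status.qosClass: Guaranteed. Names and quantities are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo                   # hypothetical name
spec:
  containers:
  - name: agnhost
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: "100m"
        memory: "100Mi"
      limits:
        cpu: "100m"
        memory: "100Mi"
```

If requests were lower than limits the pod would instead be classed Burstable, and with no requests or limits at all, BestEffort; the test verifies the Guaranteed case by reading back status.qosClass.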
•
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":23,"skipped":452,"failed":0}
SS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:56:16.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
Sep 3 13:56:17.080: INFO: role binding webhook-auth-reader already exists
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 3 13:56:17.094: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 3 13:56:19.103: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274177, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274177, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274177, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274177, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 3 13:56:22.114: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:56:22.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8649" for this suite.
STEP: Destroying namespace "webhook-8649-markers" for this suite.
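The mutating webhook test above works by editing the `rules[].operations` list of a MutatingWebhookConfiguration: with CREATE absent, a freshly created ConfigMap is not mutated; after patching CREATE back in, it is. A hedged sketch of the kind of configuration involved (webhook name, path, and namespace here are illustrative, not the framework's exact values):

```yaml
# Illustrative sketch of a MutatingWebhookConfiguration like the one the
# test updates and patches; names and paths are hypothetical.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook    # hypothetical name
webhooks:
- name: add-configmap-data.example.com   # hypothetical webhook name
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  clientConfig:
    service:
      name: e2e-test-webhook
      namespace: webhook-8649
      path: /mutating-configmaps     # hypothetical path
    # caBundle: <base64-encoded CA certificate of the webhook server>
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]   # the test removes/restores this entry via update and patch
    resources: ["configmaps"]
```

Updating the configuration to drop "CREATE" from `operations` disables the webhook for new ConfigMaps without deleting it, which is exactly what the "should not be mutated" / "should be mutated" steps verify.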
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.152 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
patching/updating a mutating webhook should work [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":30,"skipped":559,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:56:22.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Sep 3 13:56:22.332: INFO: Waiting up to 5m0s for pod "pod-81552837-b85b-4e80-ad24-4683b206313e" in namespace "emptydir-2592" to be "Succeeded or Failed"
Sep 3 13:56:22.335: INFO: Pod "pod-81552837-b85b-4e80-ad24-4683b206313e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.023544ms
Sep 3 13:56:24.339: INFO: Pod "pod-81552837-b85b-4e80-ad24-4683b206313e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006944898s
STEP: Saw pod success
Sep 3 13:56:24.339: INFO: Pod "pod-81552837-b85b-4e80-ad24-4683b206313e" satisfied condition "Succeeded or Failed"
Sep 3 13:56:24.342: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-7jvhm pod pod-81552837-b85b-4e80-ad24-4683b206313e container test-container:
STEP: delete the pod
Sep 3 13:56:24.356: INFO: Waiting for pod pod-81552837-b85b-4e80-ad24-4683b206313e to disappear
Sep 3 13:56:24.359: INFO: Pod pod-81552837-b85b-4e80-ad24-4683b206313e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:56:24.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2592" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":600,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-api-machinery] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:56:24.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating secret secrets-6607/secret-test-30b4a389-9fd7-400b-a69d-ac29f988c52f
STEP: Creating a pod to test consume secrets
Sep 3 13:56:24.415: INFO: Waiting up to 5m0s for pod "pod-configmaps-96723067-35aa-4896-a595-fbb1397ed97d" in namespace "secrets-6607" to be "Succeeded or Failed"
Sep 3 13:56:24.418: INFO: Pod "pod-configmaps-96723067-35aa-4896-a595-fbb1397ed97d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.967422ms
Sep 3 13:56:26.422: INFO: Pod "pod-configmaps-96723067-35aa-4896-a595-fbb1397ed97d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007047469s
STEP: Saw pod success
Sep 3 13:56:26.422: INFO: Pod "pod-configmaps-96723067-35aa-4896-a595-fbb1397ed97d" satisfied condition "Succeeded or Failed"
Sep 3 13:56:26.715: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-configmaps-96723067-35aa-4896-a595-fbb1397ed97d container env-test:
STEP: delete the pod
Sep 3 13:56:27.319: INFO: Waiting for pod pod-configmaps-96723067-35aa-4896-a595-fbb1397ed97d to disappear
Sep 3 13:56:27.815: INFO: Pod pod-configmaps-96723067-35aa-4896-a595-fbb1397ed97d no longer exists
[AfterEach] [sig-api-machinery] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:56:27.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6607" for this suite.
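The Secrets test above ("consumable via the environment") injects a secret value into a container's environment rather than a volume. A minimal sketch with illustrative names (the framework's actual secret is secrets-6607/secret-test-30b4a389-... and the container is env-test):

```yaml
# Illustrative sketch of consuming a secret through an environment
# variable; names are hypothetical.
apiVersion: v1
kind: Secret
metadata:
  name: demo-secret            # hypothetical name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-env-demo    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    # Dump the environment so the test can grep for the injected value.
    command: ["sh", "-c", "env"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: demo-secret
          key: data-1
```

As with the volume variants, the pod runs to completion and the test checks the container logs for the expected value.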
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":604,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:56:27.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should test the lifecycle of an Endpoint [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating an Endpoint
STEP: waiting for available Endpoint
STEP: listing all Endpoints
STEP: updating the Endpoint
STEP: fetching the Endpoint
STEP: patching the Endpoint
STEP: fetching the Endpoint
STEP: deleting the Endpoint by Collection
STEP: waiting for Endpoint deletion
STEP: fetching the Endpoint
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:56:28.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4380" for this suite.
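The Endpoint lifecycle test above walks a v1 Endpoints object through create, list, update, patch, and delete-by-collection. A sketch of the kind of object it manipulates, with illustrative names, addresses, and labels:

```yaml
# Illustrative sketch of a manually managed Endpoints object like the one
# the lifecycle test creates; name, label, and address are hypothetical.
apiVersion: v1
kind: Endpoints
metadata:
  name: e2e-example-endpoint   # hypothetical name
  labels:
    test: lifecycle            # hypothetical label used to list/delete by selector
subsets:
- addresses:
  - ip: 10.0.0.1               # hypothetical backend address
  ports:
  - name: http
    port: 80
    protocol: TCP
```

The update and patch steps typically change the subset addresses or labels, and the delete-by-collection step removes all Endpoints matching the label selector, which is why the test finishes by fetching the object to confirm it is gone.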
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":33,"skipped":622,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:56:06.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[BeforeEach] Update Demo
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299
[It] should scale a replication controller [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a replication controller
Sep 3 13:56:06.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8775 create -f -'
Sep 3 13:56:06.879: INFO: stderr: ""
Sep 3 13:56:06.879: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep 3 13:56:06.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8775 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Sep 3 13:56:07.004: INFO: stderr: ""
Sep 3 13:56:07.004: INFO: stdout: "update-demo-nautilus-7qsbg update-demo-nautilus-rgbpx "
Sep 3 13:56:07.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8775 get pods update-demo-nautilus-7qsbg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Sep 3 13:56:07.144: INFO: stderr: ""
Sep 3 13:56:07.144: INFO: stdout: ""
Sep 3 13:56:07.144: INFO: update-demo-nautilus-7qsbg is created but not running
Sep 3 13:56:12.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8775 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Sep 3 13:56:12.280: INFO: stderr: ""
Sep 3 13:56:12.280: INFO: stdout: "update-demo-nautilus-7qsbg update-demo-nautilus-rgbpx "
Sep 3 13:56:12.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8775 get pods update-demo-nautilus-7qsbg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Sep 3 13:56:12.394: INFO: stderr: ""
Sep 3 13:56:12.394: INFO: stdout: "true"
Sep 3 13:56:12.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8775 get pods update-demo-nautilus-7qsbg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Sep 3 13:56:12.511: INFO: stderr: ""
Sep 3 13:56:12.511: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 3 13:56:12.511: INFO: validating pod update-demo-nautilus-7qsbg
Sep 3 13:56:12.522: INFO: got data: {
  "image": "nautilus.jpg"
}
Sep 3 13:56:12.522: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 3 13:56:12.522: INFO: update-demo-nautilus-7qsbg is verified up and running
Sep 3 13:56:12.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8775 get pods update-demo-nautilus-rgbpx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Sep 3 13:56:12.643: INFO: stderr: ""
Sep 3 13:56:12.643: INFO: stdout: "true"
Sep 3 13:56:12.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8775 get pods update-demo-nautilus-rgbpx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Sep 3 13:56:12.760: INFO: stderr: ""
Sep 3 13:56:12.760: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 3 13:56:12.760: INFO: validating pod update-demo-nautilus-rgbpx
Sep 3 13:56:12.770: INFO: got data: {
  "image": "nautilus.jpg"
}
Sep 3 13:56:12.770: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 3 13:56:12.770: INFO: update-demo-nautilus-rgbpx is verified up and running
STEP: scaling down the replication controller
Sep 3 13:56:12.772: INFO: scanned /root for discovery docs:
Sep 3 13:56:12.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8775 scale rc update-demo-nautilus --replicas=1 --timeout=5m'
Sep 3 13:56:13.913: INFO: stderr: ""
Sep 3 13:56:13.913: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep 3 13:56:13.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8775 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Sep 3 13:56:14.048: INFO: stderr: ""
Sep 3 13:56:14.048: INFO: stdout: "update-demo-nautilus-7qsbg update-demo-nautilus-rgbpx "
STEP: Replicas for name=update-demo: expected=1 actual=2
Sep 3 13:56:19.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8775 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Sep 3 13:56:19.175: INFO: stderr: ""
Sep 3 13:56:19.175: INFO: stdout: "update-demo-nautilus-rgbpx "
Sep 3 13:56:19.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8775 get pods update-demo-nautilus-rgbpx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Sep 3 13:56:19.288: INFO: stderr: ""
Sep 3 13:56:19.288: INFO: stdout: "true"
Sep 3 13:56:19.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8775 get pods update-demo-nautilus-rgbpx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Sep 3 13:56:19.405: INFO: stderr: ""
Sep 3 13:56:19.405: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 3 13:56:19.405: INFO: validating pod update-demo-nautilus-rgbpx
Sep 3 13:56:19.408: INFO: got data: {
  "image": "nautilus.jpg"
}
Sep 3 13:56:19.408: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 3 13:56:19.408: INFO: update-demo-nautilus-rgbpx is verified up and running
STEP: scaling up the replication controller
Sep 3 13:56:19.412: INFO: scanned /root for discovery docs:
Sep 3 13:56:19.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8775 scale rc update-demo-nautilus --replicas=2 --timeout=5m'
Sep 3 13:56:20.549: INFO: stderr: ""
Sep 3 13:56:20.549: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep 3 13:56:20.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8775 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Sep 3 13:56:20.672: INFO: stderr: ""
Sep 3 13:56:20.673: INFO: stdout: "update-demo-nautilus-rgbpx update-demo-nautilus-v57kn "
Sep 3 13:56:20.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8775 get pods update-demo-nautilus-rgbpx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Sep 3 13:56:20.797: INFO: stderr: ""
Sep 3 13:56:20.797: INFO: stdout: "true"
Sep 3 13:56:20.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8775 get pods update-demo-nautilus-rgbpx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Sep 3 13:56:20.916: INFO: stderr: ""
Sep 3 13:56:20.916: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 3 13:56:20.916: INFO: validating pod update-demo-nautilus-rgbpx
Sep 3 13:56:20.920: INFO: got data: {
  "image": "nautilus.jpg"
}
Sep 3 13:56:20.920: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 3 13:56:20.920: INFO: update-demo-nautilus-rgbpx is verified up and running
Sep 3 13:56:20.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8775 get pods update-demo-nautilus-v57kn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Sep 3 13:56:21.044: INFO: stderr: ""
Sep 3 13:56:21.045: INFO: stdout: ""
Sep 3 13:56:21.045: INFO: update-demo-nautilus-v57kn is created but not running
Sep 3 13:56:26.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8775 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Sep 3 13:56:26.432: INFO: stderr: ""
Sep 3 13:56:26.432: INFO: stdout: "update-demo-nautilus-rgbpx update-demo-nautilus-v57kn "
Sep 3 13:56:26.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8775 get pods update-demo-nautilus-rgbpx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Sep 3 13:56:26.725: INFO: stderr: ""
Sep 3 13:56:26.725: INFO: stdout: "true"
Sep 3 13:56:26.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8775 get pods update-demo-nautilus-rgbpx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Sep 3 13:56:27.030: INFO: stderr: ""
Sep 3 13:56:27.030: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 3 13:56:27.030: INFO: validating pod update-demo-nautilus-rgbpx
Sep 3 13:56:27.319: INFO: got data: {
  "image": "nautilus.jpg"
}
Sep 3 13:56:27.319: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 3 13:56:27.319: INFO: update-demo-nautilus-rgbpx is verified up and running
Sep 3 13:56:27.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8775 get pods update-demo-nautilus-v57kn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Sep 3 13:56:27.825: INFO: stderr: ""
Sep 3 13:56:27.825: INFO: stdout: "true"
Sep 3 13:56:27.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8775 get pods update-demo-nautilus-v57kn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Sep 3 13:56:28.025: INFO: stderr: ""
Sep 3 13:56:28.025: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 3 13:56:28.025: INFO: validating pod update-demo-nautilus-v57kn
Sep 3 13:56:28.117: INFO: got data: {
  "image": "nautilus.jpg"
}
Sep 3 13:56:28.117: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 3 13:56:28.117: INFO: update-demo-nautilus-v57kn is verified up and running STEP: using delete to clean up resources Sep 3 13:56:28.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8775 delete --grace-period=0 --force -f -' Sep 3 13:56:28.624: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 3 13:56:28.625: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Sep 3 13:56:28.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8775 get rc,svc -l name=update-demo --no-headers' Sep 3 13:56:28.935: INFO: stderr: "No resources found in kubectl-8775 namespace.\n" Sep 3 13:56:28.935: INFO: stdout: "" Sep 3 13:56:28.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8775 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 3 13:56:29.074: INFO: stderr: "" Sep 3 13:56:29.074: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:29.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8775" for this suite. 
• [SLOW TEST:22.492 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":34,"skipped":715,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:21.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:29.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9197" for this suite. • [SLOW TEST:7.356 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":24,"skipped":454,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:52:27.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-1f061a87-9a24-4894-ad39-c307a17cd792 in namespace container-probe-2567 Sep 3 13:52:29.130: INFO: Started pod busybox-1f061a87-9a24-4894-ad39-c307a17cd792 in namespace container-probe-2567 STEP: checking the pod's current state and verifying that restartCount is present Sep 3 13:52:29.133: INFO: Initial restart count of pod busybox-1f061a87-9a24-4894-ad39-c307a17cd792 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:30.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2567" for this suite. 
• [SLOW TEST:243.852 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":269,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:28.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Sep 3 13:56:31.021: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:31.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9256" for this suite. 
• ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":642,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:29.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-71ee676b-673c-42fa-9ee3-dc0177691d2c STEP: Creating a pod to test consume configMaps Sep 3 13:56:29.382: INFO: Waiting up to 5m0s for pod "pod-configmaps-ae5e1f1d-01de-444c-ab49-e8a18e104be9" in namespace "configmap-5267" to be "Succeeded or Failed" Sep 3 13:56:29.384: INFO: Pod "pod-configmaps-ae5e1f1d-01de-444c-ab49-e8a18e104be9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221625ms Sep 3 13:56:31.388: INFO: Pod "pod-configmaps-ae5e1f1d-01de-444c-ab49-e8a18e104be9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006003242s Sep 3 13:56:33.392: INFO: Pod "pod-configmaps-ae5e1f1d-01de-444c-ab49-e8a18e104be9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009910735s STEP: Saw pod success Sep 3 13:56:33.392: INFO: Pod "pod-configmaps-ae5e1f1d-01de-444c-ab49-e8a18e104be9" satisfied condition "Succeeded or Failed" Sep 3 13:56:33.395: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-configmaps-ae5e1f1d-01de-444c-ab49-e8a18e104be9 container configmap-volume-test: STEP: delete the pod Sep 3 13:56:33.411: INFO: Waiting for pod pod-configmaps-ae5e1f1d-01de-444c-ab49-e8a18e104be9 to disappear Sep 3 13:56:33.414: INFO: Pod pod-configmaps-ae5e1f1d-01de-444c-ab49-e8a18e104be9 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:33.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5267" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":462,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:33.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server Sep 3 13:56:33.485: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2884 proxy -p 
0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:33.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2884" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":26,"skipped":483,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:21.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-21dd0ebb-2c49-49ab-a84e-f9f08b1969a2 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-21dd0ebb-2c49-49ab-a84e-f9f08b1969a2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:33.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2975" for this suite. 
• [SLOW TEST:72.379 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":447,"failed":0} [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:17.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Sep 3 13:56:23.112: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 3 13:56:23.116: INFO: Pod pod-with-prestop-http-hook still exists Sep 3 13:56:25.116: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 3 13:56:25.121: INFO: Pod pod-with-prestop-http-hook still exists Sep 3 13:56:27.116: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 3 13:56:27.319: INFO: Pod pod-with-prestop-http-hook still exists Sep 3 13:56:29.116: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 3 13:56:29.120: INFO: Pod pod-with-prestop-http-hook still exists Sep 3 13:56:31.116: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 3 13:56:31.119: INFO: Pod pod-with-prestop-http-hook still exists Sep 3 13:56:33.116: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 3 13:56:33.120: INFO: Pod pod-with-prestop-http-hook still exists Sep 3 13:56:35.116: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 3 13:56:35.122: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:35.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4011" for this suite. 
• [SLOW TEST:18.091 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":447,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:29.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Sep 3 13:56:29.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5435 create -f -' Sep 3 13:56:29.370: INFO: stderr: "" Sep 3 13:56:29.370: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come 
up. Sep 3 13:56:29.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5435 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Sep 3 13:56:29.489: INFO: stderr: "" Sep 3 13:56:29.489: INFO: stdout: "update-demo-nautilus-468jw update-demo-nautilus-8mqgd " Sep 3 13:56:29.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5435 get pods update-demo-nautilus-468jw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Sep 3 13:56:29.612: INFO: stderr: "" Sep 3 13:56:29.612: INFO: stdout: "" Sep 3 13:56:29.612: INFO: update-demo-nautilus-468jw is created but not running Sep 3 13:56:34.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5435 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Sep 3 13:56:34.835: INFO: stderr: "" Sep 3 13:56:34.835: INFO: stdout: "update-demo-nautilus-468jw update-demo-nautilus-8mqgd " Sep 3 13:56:34.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5435 get pods update-demo-nautilus-468jw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Sep 3 13:56:34.962: INFO: stderr: "" Sep 3 13:56:34.962: INFO: stdout: "true" Sep 3 13:56:34.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5435 get pods update-demo-nautilus-468jw -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Sep 3 13:56:35.131: INFO: stderr: "" Sep 3 13:56:35.131: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 3 13:56:35.131: INFO: validating pod update-demo-nautilus-468jw Sep 3 13:56:35.136: INFO: got data: { "image": "nautilus.jpg" } Sep 3 13:56:35.136: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 3 13:56:35.136: INFO: update-demo-nautilus-468jw is verified up and running Sep 3 13:56:35.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5435 get pods update-demo-nautilus-8mqgd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Sep 3 13:56:35.331: INFO: stderr: "" Sep 3 13:56:35.331: INFO: stdout: "true" Sep 3 13:56:35.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5435 get pods update-demo-nautilus-8mqgd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Sep 3 13:56:35.453: INFO: stderr: "" Sep 3 13:56:35.454: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 3 13:56:35.454: INFO: validating pod update-demo-nautilus-8mqgd Sep 3 13:56:35.459: INFO: got data: { "image": "nautilus.jpg" } Sep 3 13:56:35.460: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Sep 3 13:56:35.460: INFO: update-demo-nautilus-8mqgd is verified up and running STEP: using delete to clean up resources Sep 3 13:56:35.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5435 delete --grace-period=0 --force -f -' Sep 3 13:56:35.579: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 3 13:56:35.579: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Sep 3 13:56:35.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5435 get rc,svc -l name=update-demo --no-headers' Sep 3 13:56:35.702: INFO: stderr: "No resources found in kubectl-5435 namespace.\n" Sep 3 13:56:35.702: INFO: stdout: "" Sep 3 13:56:35.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5435 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 3 13:56:35.823: INFO: stderr: "" Sep 3 13:56:35.823: INFO: stdout: "update-demo-nautilus-468jw\nupdate-demo-nautilus-8mqgd\n" Sep 3 13:56:36.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5435 get rc,svc -l name=update-demo --no-headers' Sep 3 13:56:36.446: INFO: stderr: "No resources found in kubectl-5435 namespace.\n" Sep 3 13:56:36.446: INFO: stdout: "" Sep 3 13:56:36.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5435 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 3 13:56:36.579: INFO: stderr: "" Sep 3 13:56:36.579: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:36.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5435" for this suite. • [SLOW TEST:7.503 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":35,"skipped":716,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:31.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 3 13:56:31.081: INFO: Waiting up to 5m0s for pod "downwardapi-volume-94976f60-7547-4703-9363-de8c844dd3df" in namespace "downward-api-8227" to be "Succeeded or Failed" Sep 3 13:56:31.083: INFO: Pod 
"downwardapi-volume-94976f60-7547-4703-9363-de8c844dd3df": Phase="Pending", Reason="", readiness=false. Elapsed: 1.979919ms Sep 3 13:56:33.086: INFO: Pod "downwardapi-volume-94976f60-7547-4703-9363-de8c844dd3df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00532889s Sep 3 13:56:35.121: INFO: Pod "downwardapi-volume-94976f60-7547-4703-9363-de8c844dd3df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040843833s Sep 3 13:56:37.126: INFO: Pod "downwardapi-volume-94976f60-7547-4703-9363-de8c844dd3df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045743936s STEP: Saw pod success Sep 3 13:56:37.126: INFO: Pod "downwardapi-volume-94976f60-7547-4703-9363-de8c844dd3df" satisfied condition "Succeeded or Failed" Sep 3 13:56:37.129: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod downwardapi-volume-94976f60-7547-4703-9363-de8c844dd3df container client-container: STEP: delete the pod Sep 3 13:56:37.142: INFO: Waiting for pod downwardapi-volume-94976f60-7547-4703-9363-de8c844dd3df to disappear Sep 3 13:56:37.145: INFO: Pod downwardapi-volume-94976f60-7547-4703-9363-de8c844dd3df no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:37.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8227" for this suite. 
• [SLOW TEST:6.105 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":645,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:11.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-3a8e7e2c-2af0-4794-9eb2-0df3e4461082 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-3a8e7e2c-2af0-4794-9eb2-0df3e4461082 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:38.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3550" for this suite. 
• [SLOW TEST:87.130 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":320,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":248,"failed":0} [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:27.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0903 13:55:37.189105 34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Sep 3 13:56:39.209: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:39.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1553" for this suite. 
• [SLOW TEST:72.083 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":14,"skipped":248,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":473,"failed":0} [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:33.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Sep 3 13:56:33.897: INFO: Waiting up to 5m0s for pod "pod-94a6abbd-a510-4650-8d02-0c1cb2ce0eb2" in namespace "emptydir-4906" to be "Succeeded or Failed" Sep 3 13:56:33.900: INFO: Pod "pod-94a6abbd-a510-4650-8d02-0c1cb2ce0eb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.865209ms Sep 3 13:56:35.903: INFO: Pod "pod-94a6abbd-a510-4650-8d02-0c1cb2ce0eb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005881063s Sep 3 13:56:37.906: INFO: Pod "pod-94a6abbd-a510-4650-8d02-0c1cb2ce0eb2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.00914572s Sep 3 13:56:39.910: INFO: Pod "pod-94a6abbd-a510-4650-8d02-0c1cb2ce0eb2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01327169s STEP: Saw pod success Sep 3 13:56:39.910: INFO: Pod "pod-94a6abbd-a510-4650-8d02-0c1cb2ce0eb2" satisfied condition "Succeeded or Failed" Sep 3 13:56:39.914: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-94a6abbd-a510-4650-8d02-0c1cb2ce0eb2 container test-container: STEP: delete the pod Sep 3 13:56:39.929: INFO: Waiting for pod pod-94a6abbd-a510-4650-8d02-0c1cb2ce0eb2 to disappear Sep 3 13:56:39.932: INFO: Pod pod-94a6abbd-a510-4650-8d02-0c1cb2ce0eb2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:39.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4906" for this suite. • [SLOW TEST:6.079 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:33.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Sep 3 13:56:33.663: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Sep 3 13:56:33.668: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Sep 3 13:56:33.668: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Sep 3 13:56:33.674: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Sep 3 13:56:33.674: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Sep 3 13:56:33.683: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m 
DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Sep 3 13:56:33.683: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Sep 3 13:56:40.711: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:40.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-2474" for this suite. • [SLOW TEST:7.102 seconds] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance]","total":-1,"completed":27,"skipped":489,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:38.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Sep 3 13:56:38.786: INFO: Waiting up to 5m0s for pod "pod-e501e44d-2518-4896-a1e6-fea71db965e8" in namespace "emptydir-2891" to be "Succeeded or Failed" Sep 3 13:56:38.788: INFO: Pod "pod-e501e44d-2518-4896-a1e6-fea71db965e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.668028ms Sep 3 13:56:40.793: INFO: Pod "pod-e501e44d-2518-4896-a1e6-fea71db965e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006738917s Sep 3 13:56:42.796: INFO: Pod "pod-e501e44d-2518-4896-a1e6-fea71db965e8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010459483s STEP: Saw pod success Sep 3 13:56:42.796: INFO: Pod "pod-e501e44d-2518-4896-a1e6-fea71db965e8" satisfied condition "Succeeded or Failed" Sep 3 13:56:42.799: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-e501e44d-2518-4896-a1e6-fea71db965e8 container test-container: STEP: delete the pod Sep 3 13:56:42.813: INFO: Waiting for pod pod-e501e44d-2518-4896-a1e6-fea71db965e8 to disappear Sep 3 13:56:42.815: INFO: Pod pod-e501e44d-2518-4896-a1e6-fea71db965e8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:42.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2891" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":363,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:39.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Sep 3 13:56:39.340: INFO: namespace kubectl-2649 Sep 3 13:56:39.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2649 create -f -' Sep 3 13:56:39.623: INFO: stderr: "" 
Sep 3 13:56:39.623: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Sep 3 13:56:40.628: INFO: Selector matched 1 pods for map[app:agnhost] Sep 3 13:56:40.628: INFO: Found 0 / 1 Sep 3 13:56:41.626: INFO: Selector matched 1 pods for map[app:agnhost] Sep 3 13:56:41.626: INFO: Found 1 / 1 Sep 3 13:56:41.626: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Sep 3 13:56:41.629: INFO: Selector matched 1 pods for map[app:agnhost] Sep 3 13:56:41.629: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Sep 3 13:56:41.629: INFO: wait on agnhost-primary startup in kubectl-2649 Sep 3 13:56:41.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2649 logs agnhost-primary-gm6b9 agnhost-primary' Sep 3 13:56:41.775: INFO: stderr: "" Sep 3 13:56:41.775: INFO: stdout: "Paused\n" STEP: exposing RC Sep 3 13:56:41.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2649 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' Sep 3 13:56:41.921: INFO: stderr: "" Sep 3 13:56:41.921: INFO: stdout: "service/rm2 exposed\n" Sep 3 13:56:41.924: INFO: Service rm2 in namespace kubectl-2649 found. STEP: exposing service Sep 3 13:56:43.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2649 expose service rm2 --name=rm3 --port=2345 --target-port=6379' Sep 3 13:56:44.078: INFO: stderr: "" Sep 3 13:56:44.078: INFO: stdout: "service/rm3 exposed\n" Sep 3 13:56:44.081: INFO: Service rm3 in namespace kubectl-2649 found. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:46.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2649" for this suite. 
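The expose chain above maps ports twice: `rm2` exposes the RC on port 1234 targeting container port 6379, and `rm3` exposes `rm2` on port 2345, still targeting 6379. Traffic always terminates at the final targetPort regardless of how many services are chained, which can be sketched with a minimal stand-in for the Service spec fields `kubectl expose` sets:

```python
def make_service(name, port, target_port):
    """Minimal, hypothetical stand-in for a Service spec: just the
    name, the service port, and the backend targetPort."""
    return {"name": name, "port": port, "targetPort": target_port}


def resolve_container_port(service):
    # A Service forwards to its targetPort; chaining services in front
    # of each other never changes the final container port.
    return service["targetPort"]


rm2 = make_service("rm2", port=1234, target_port=6379)
# Exposing a service copies its targetPort into the new service.
rm3 = make_service("rm3", port=2345, target_port=rm2["targetPort"])
assert resolve_container_port(rm2) == 6379
assert resolve_container_port(rm3) == 6379
```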
• [SLOW TEST:6.829 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1222 should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":15,"skipped":274,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:46.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources 
[Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:46.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5749" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":16,"skipped":313,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:42.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:46.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-350" for this suite. 
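The CRD discovery test above walks three levels of discovery documents: `/apis` lists API groups, `/apis/<group>` lists that group's versions, and `/apis/<group>/<version>` lists its resources. With the documents modeled as plain dicts, the lookups the test performs reduce to roughly this (the documents here are hypothetical and heavily trimmed; real ones carry many more fields such as preferredVersion, verbs, and shortNames):

```python
# Trimmed stand-ins for the /apis and group-version discovery documents.
apis = {
    "groups": [
        {
            "name": "apiextensions.k8s.io",
            "versions": [
                {"groupVersion": "apiextensions.k8s.io/v1", "version": "v1"},
            ],
        },
    ],
}
v1_doc = {
    "resources": [
        {"name": "customresourcedefinitions", "kind": "CustomResourceDefinition"},
    ],
}


def find_group(doc, name):
    """Locate an API group in a /apis-style discovery document."""
    return next((g for g in doc["groups"] if g["name"] == name), None)


def has_resource(doc, name):
    """Check a group-version document for a resource by plural name."""
    return any(r["name"] == name for r in doc["resources"])


group = find_group(apis, "apiextensions.k8s.io")
assert group is not None
assert any(v["version"] == "v1" for v in group["versions"])
assert has_resource(v1_doc, "customresourcedefinitions")
```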
• ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":376,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:37.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:48.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1387" for this suite. • [SLOW TEST:11.076 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":36,"skipped":683,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:46.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 3 13:56:46.316: INFO: Waiting up to 5m0s for pod "downwardapi-volume-28a5f242-bea5-4b3d-8046-033cb79c8ee1" in namespace "projected-9305" to be "Succeeded or Failed" Sep 3 13:56:46.319: INFO: Pod "downwardapi-volume-28a5f242-bea5-4b3d-8046-033cb79c8ee1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.930218ms Sep 3 13:56:48.323: INFO: Pod "downwardapi-volume-28a5f242-bea5-4b3d-8046-033cb79c8ee1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006291462s Sep 3 13:56:50.327: INFO: Pod "downwardapi-volume-28a5f242-bea5-4b3d-8046-033cb79c8ee1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010379949s STEP: Saw pod success Sep 3 13:56:50.327: INFO: Pod "downwardapi-volume-28a5f242-bea5-4b3d-8046-033cb79c8ee1" satisfied condition "Succeeded or Failed" Sep 3 13:56:50.330: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod downwardapi-volume-28a5f242-bea5-4b3d-8046-033cb79c8ee1 container client-container: STEP: delete the pod Sep 3 13:56:50.346: INFO: Waiting for pod downwardapi-volume-28a5f242-bea5-4b3d-8046-033cb79c8ee1 to disappear Sep 3 13:56:50.349: INFO: Pod downwardapi-volume-28a5f242-bea5-4b3d-8046-033cb79c8ee1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:50.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9305" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":356,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:48.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Sep 3 13:56:48.341: INFO: Waiting up to 5m0s for pod "pod-707263b8-e64d-4e88-bedb-6bf59fac0689" in namespace "emptydir-6465" to be "Succeeded or Failed" Sep 3 13:56:48.345: INFO: Pod "pod-707263b8-e64d-4e88-bedb-6bf59fac0689": 
Phase="Pending", Reason="", readiness=false. Elapsed: 3.258998ms Sep 3 13:56:50.348: INFO: Pod "pod-707263b8-e64d-4e88-bedb-6bf59fac0689": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006874496s STEP: Saw pod success Sep 3 13:56:50.348: INFO: Pod "pod-707263b8-e64d-4e88-bedb-6bf59fac0689" satisfied condition "Succeeded or Failed" Sep 3 13:56:50.351: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-7jvhm pod pod-707263b8-e64d-4e88-bedb-6bf59fac0689 container test-container: STEP: delete the pod Sep 3 13:56:50.364: INFO: Waiting for pod pod-707263b8-e64d-4e88-bedb-6bf59fac0689 to disappear Sep 3 13:56:50.367: INFO: Pod pod-707263b8-e64d-4e88-bedb-6bf59fac0689 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:50.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6465" for this suite. 
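The emptyDir permission tests above ((non-root,0644,tmpfs), (root,0777,default), (non-root,0644,default)) each mount a volume, create a file with the requested mode, and have the test container report the permission bits it observes. The mode check itself reduces to something like this, run against a local temp directory rather than an actual emptyDir volume:

```python
import os
import stat
import tempfile


def file_mode(path):
    """Return a file's permission bits as a zero-padded octal string,
    the way the e2e test containers report them (e.g. '0644')."""
    return format(stat.S_IMODE(os.stat(path).st_mode), "04o")


d = tempfile.mkdtemp()
p = os.path.join(d, "test-file")
fd = os.open(p, os.O_CREAT | os.O_WRONLY, 0o644)
os.close(fd)
os.chmod(p, 0o644)  # be explicit in case the process umask narrowed the bits
assert file_mode(p) == "0644"
```

The [LinuxOnly] tag on these specs exists because mode bits like 0644 are a POSIX concept and are not meaningful on Windows nodes.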
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":686,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:35.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 3 13:56:35.899: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 3 13:56:37.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274195, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274195, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274196, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766274195, loc:(*time.Location)(0x770e980)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 3 13:56:40.924: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 3 13:56:40.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5122-crds.webhook.example.com via the AdmissionRegistration API Sep 3 13:56:51.452: INFO: Waiting for webhook configuration to be ready... STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:52.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-583" for this suite. STEP: Destroying namespace "webhook-583-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.997 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":28,"skipped":454,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:52.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:52.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3907" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • ------------------------------ {"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":29,"skipped":459,"failed":0} SSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":473,"failed":0} [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:39.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-402 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-402 STEP: Deleting pre-stop pod Sep 3 13:56:55.007: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:55.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-402" for this suite. • [SLOW TEST:15.083 seconds] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":32,"skipped":473,"failed":0} S ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:36.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-6462 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 3 13:56:36.659: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Sep 3 13:56:36.680: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 3 13:56:38.683: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 3 13:56:40.684: INFO: The status of 
Pod netserver-0 is Running (Ready = false) Sep 3 13:56:42.684: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 3 13:56:44.684: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 3 13:56:46.717: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 3 13:56:48.684: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 3 13:56:50.684: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 3 13:56:52.684: INFO: The status of Pod netserver-0 is Running (Ready = true) Sep 3 13:56:52.690: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Sep 3 13:56:54.711: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.196:8080/dial?request=hostname&protocol=udp&host=192.168.2.187&port=8081&tries=1'] Namespace:pod-network-test-6462 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 3 13:56:54.711: INFO: >>> kubeConfig: /root/.kube/config Sep 3 13:56:54.889: INFO: Waiting for responses: map[] Sep 3 13:56:54.892: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.196:8080/dial?request=hostname&protocol=udp&host=192.168.1.242&port=8081&tries=1'] Namespace:pod-network-test-6462 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 3 13:56:54.892: INFO: >>> kubeConfig: /root/.kube/config Sep 3 13:56:55.020: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:56:55.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6462" for this suite. 
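The intra-pod UDP check above asks a test-container pod to `/dial` each netserver over UDP and report the hostnames it gets back. Stripped of the pod and curl plumbing, the probe is a plain UDP request/response, which can be sketched locally (loopback and an ephemeral port stand in for the pod IPs and port 8081 in the test):

```python
import socket
import threading


def udp_echo_server(sock):
    """Stand-in for the netserver's hostname responder: answer one
    datagram with a fixed hostname."""
    data, addr = sock.recvfrom(1024)
    sock.sendto(b"netserver-0", addr)


server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))  # ephemeral port
port = server.getsockname()[1]
threading.Thread(target=udp_echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2.0)
client.sendto(b"hostname", ("127.0.0.1", port))
reply, _ = client.recvfrom(1024)
assert reply == b"netserver-0"
```

The test passes when every netserver's hostname comes back, i.e. the "Waiting for responses: map[]" lines in the log mean no responses are still outstanding.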
• [SLOW TEST:18.403 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":738,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:55.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-8943 STEP: creating replication controller nodeport-test in namespace services-8943 I0903 13:56:55.151429 32 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-8943, replica count: 2 I0903 13:56:58.202105 32 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 
unknown, 0 runningButNotReady Sep 3 13:56:58.202: INFO: Creating new exec pod Sep 3 13:57:01.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodr86j6 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Sep 3 13:57:01.485: INFO: stderr: "+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" Sep 3 13:57:01.485: INFO: stdout: "" Sep 3 13:57:01.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodr86j6 -- /bin/sh -x -c nc -zv -t -w 2 10.138.205.186 80' Sep 3 13:57:01.737: INFO: stderr: "+ nc -zv -t -w 2 10.138.205.186 80\nConnection to 10.138.205.186 80 port [tcp/http] succeeded!\n" Sep 3 13:57:01.737: INFO: stdout: "" Sep 3 13:57:01.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodr86j6 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.9 32602' Sep 3 13:57:01.986: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.9 32602\nConnection to 172.18.0.9 32602 port [tcp/32602] succeeded!\n" Sep 3 13:57:01.986: INFO: stdout: "" Sep 3 13:57:01.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodr86j6 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.10 32602' Sep 3 13:57:02.220: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.10 32602\nConnection to 172.18.0.10 32602 port [tcp/32602] succeeded!\n" Sep 3 13:57:02.220: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:57:02.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8943" for this suite. 
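An annotation on the NodePort checks above (a sketch, not suite output): each `nc -zv -t -w 2 <host> <port>` invocation is a zero-byte TCP connect probe, run against the service name, the ClusterIP, and each node's IP at the allocated nodePort. The same probe in Python, exercised here against a throwaway local listener since no cluster is assumed:

```python
# Sketch: a Python equivalent of the `nc -zv -t -w 2 <host> <port>` probe the
# test runs four times above. A successful TCP connect (no data sent) is
# enough to prove the port is reachable.
import socket

def tcp_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a local listener standing in for the NodePort endpoint:
listener = socket.socket()
listener.bind(("127.0.0.1", 0))       # let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
print(tcp_port_open("127.0.0.1", port))   # True: port accepts connections
listener.close()
```

Probing both node IPs (172.18.0.9 and 172.18.0.10) at the same nodePort is the point of the test: a NodePort service must be reachable on every node, not just the ones hosting backend pods.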
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:7.125 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":37,"skipped":783,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:57:02.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create deployment with httpd image Sep 3 13:57:02.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4880 create -f -' Sep 3 13:57:02.637: INFO: stderr: "" Sep 3 13:57:02.637: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Sep 3 13:57:02.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4880 diff -f -' Sep 3 13:57:03.077: INFO: rc: 1 Sep 3 13:57:03.077: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4880 delete -f -' Sep 3 13:57:03.203: INFO: stderr: "" Sep 3 13:57:03.204: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:57:03.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4880" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":38,"skipped":790,"failed":0} SSSSSSS ------------------------------ Sep 3 13:57:03.228: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:50.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:57:06.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1312" for this suite. • [SLOW TEST:16.073 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":18,"skipped":362,"failed":0} Sep 3 13:57:06.446: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:52.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:57:08.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6325" for this suite. • [SLOW TEST:16.110 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":-1,"completed":30,"skipped":468,"failed":0} Sep 3 13:57:08.332: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:55.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-9947 STEP: Creating a pod to test atomic-volume-subpath Sep 3 13:56:55.095: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-9947" in namespace "subpath-7631" to be "Succeeded or Failed" Sep 3 13:56:55.098: INFO: Pod "pod-subpath-test-secret-9947": Phase="Pending", Reason="", readiness=false. Elapsed: 2.766776ms Sep 3 13:56:57.101: INFO: Pod "pod-subpath-test-secret-9947": Phase="Running", Reason="", readiness=true. Elapsed: 2.006361236s Sep 3 13:56:59.105: INFO: Pod "pod-subpath-test-secret-9947": Phase="Running", Reason="", readiness=true. Elapsed: 4.010391502s Sep 3 13:57:01.116: INFO: Pod "pod-subpath-test-secret-9947": Phase="Running", Reason="", readiness=true. Elapsed: 6.021334962s Sep 3 13:57:03.121: INFO: Pod "pod-subpath-test-secret-9947": Phase="Running", Reason="", readiness=true. Elapsed: 8.025495386s Sep 3 13:57:05.125: INFO: Pod "pod-subpath-test-secret-9947": Phase="Running", Reason="", readiness=true. Elapsed: 10.030403383s Sep 3 13:57:07.130: INFO: Pod "pod-subpath-test-secret-9947": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.035063811s Sep 3 13:57:09.134: INFO: Pod "pod-subpath-test-secret-9947": Phase="Running", Reason="", readiness=true. Elapsed: 14.039170289s Sep 3 13:57:11.422: INFO: Pod "pod-subpath-test-secret-9947": Phase="Running", Reason="", readiness=true. Elapsed: 16.327342444s Sep 3 13:57:13.426: INFO: Pod "pod-subpath-test-secret-9947": Phase="Running", Reason="", readiness=true. Elapsed: 18.330792869s Sep 3 13:57:15.430: INFO: Pod "pod-subpath-test-secret-9947": Phase="Running", Reason="", readiness=true. Elapsed: 20.334986004s Sep 3 13:57:17.434: INFO: Pod "pod-subpath-test-secret-9947": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.338846673s STEP: Saw pod success Sep 3 13:57:17.434: INFO: Pod "pod-subpath-test-secret-9947" satisfied condition "Succeeded or Failed" Sep 3 13:57:17.437: INFO: Trying to get logs from node capi-kali-md-0-76b6798f7f-5n8xl pod pod-subpath-test-secret-9947 container test-container-subpath-secret-9947: STEP: delete the pod Sep 3 13:57:17.450: INFO: Waiting for pod pod-subpath-test-secret-9947 to disappear Sep 3 13:57:17.453: INFO: Pod pod-subpath-test-secret-9947 no longer exists STEP: Deleting pod pod-subpath-test-secret-9947 Sep 3 13:57:17.453: INFO: Deleting pod "pod-subpath-test-secret-9947" in namespace "subpath-7631" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:57:17.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7631" for this suite. 
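An annotation on the subpath test above (a sketch, not suite output): the `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines come from a poll loop over the pod's phase. Its shape, reduced to a cluster-free form where `get_phase` is any callable (the real framework queries the API server), with fake-friendly `clock`/`sleep` hooks of our own:

```python
# Sketch: the wait loop behind the "Waiting up to 5m0s for pod ... to be
# 'Succeeded or Failed'" log lines, with the API-server call abstracted away.
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until a terminal phase is reached or timeout expires."""
    deadline = clock() + timeout
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if clock() >= deadline:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)

# Exercise with a canned sequence like the log's Pending -> Running -> Succeeded:
phases = iter(["Pending", "Running", "Running", "Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases), sleep=lambda _: None))
```

The elapsed-time stamps in the log (2ms, 2s, 4s, ...) reflect exactly such an interval-based poll, with the pod running for ~20s before it exits and the phase flips to Succeeded.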
• [SLOW TEST:22.411 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":33,"skipped":487,"failed":0} Sep 3 13:57:17.466: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:31.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-4159 STEP: creating service affinity-nodeport-transition in namespace services-4159 STEP: creating replication controller affinity-nodeport-transition in namespace services-4159 I0903 13:56:31.052257 23 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-4159, replica count: 3 I0903 13:56:34.102769 23 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 
0 terminating, 0 unknown, 0 runningButNotReady I0903 13:56:37.103135 23 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 3 13:56:37.115: INFO: Creating new exec pod Sep 3 13:56:42.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4159 exec execpod-affinityzlwct -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Sep 3 13:56:42.387: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Sep 3 13:56:42.387: INFO: stdout: "" Sep 3 13:56:42.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4159 exec execpod-affinityzlwct -- /bin/sh -x -c nc -zv -t -w 2 10.132.52.197 80' Sep 3 13:56:42.635: INFO: stderr: "+ nc -zv -t -w 2 10.132.52.197 80\nConnection to 10.132.52.197 80 port [tcp/http] succeeded!\n" Sep 3 13:56:42.635: INFO: stdout: "" Sep 3 13:56:42.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4159 exec execpod-affinityzlwct -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.9 32233' Sep 3 13:56:42.865: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.9 32233\nConnection to 172.18.0.9 32233 port [tcp/32233] succeeded!\n" Sep 3 13:56:42.865: INFO: stdout: "" Sep 3 13:56:42.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4159 exec execpod-affinityzlwct -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.10 32233' Sep 3 13:56:43.111: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.10 32233\nConnection to 172.18.0.10 32233 port [tcp/32233] succeeded!\n" Sep 3 13:56:43.111: INFO: stdout: "" Sep 3 13:56:43.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4159 exec execpod-affinityzlwct -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 
http://172.18.0.9:32233/ ; done' Sep 3 13:56:43.493: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n" Sep 3 13:56:43.493: INFO: stdout: "\naffinity-nodeport-transition-lmcv2\naffinity-nodeport-transition-pqhpw\naffinity-nodeport-transition-pqhpw\naffinity-nodeport-transition-pqhpw\naffinity-nodeport-transition-pqhpw\naffinity-nodeport-transition-lmcv2\naffinity-nodeport-transition-wt4h6\naffinity-nodeport-transition-pqhpw\naffinity-nodeport-transition-lmcv2\naffinity-nodeport-transition-lmcv2\naffinity-nodeport-transition-lmcv2\naffinity-nodeport-transition-lmcv2\naffinity-nodeport-transition-pqhpw\naffinity-nodeport-transition-pqhpw\naffinity-nodeport-transition-pqhpw\naffinity-nodeport-transition-wt4h6" Sep 3 13:56:43.494: INFO: Received response from host: affinity-nodeport-transition-lmcv2 Sep 3 13:56:43.494: INFO: Received response from host: affinity-nodeport-transition-pqhpw Sep 3 13:56:43.494: INFO: Received response from 
host: affinity-nodeport-transition-pqhpw Sep 3 13:56:43.494: INFO: Received response from host: affinity-nodeport-transition-pqhpw Sep 3 13:56:43.494: INFO: Received response from host: affinity-nodeport-transition-pqhpw Sep 3 13:56:43.494: INFO: Received response from host: affinity-nodeport-transition-lmcv2 Sep 3 13:56:43.494: INFO: Received response from host: affinity-nodeport-transition-wt4h6 Sep 3 13:56:43.494: INFO: Received response from host: affinity-nodeport-transition-pqhpw Sep 3 13:56:43.494: INFO: Received response from host: affinity-nodeport-transition-lmcv2 Sep 3 13:56:43.494: INFO: Received response from host: affinity-nodeport-transition-lmcv2 Sep 3 13:56:43.494: INFO: Received response from host: affinity-nodeport-transition-lmcv2 Sep 3 13:56:43.494: INFO: Received response from host: affinity-nodeport-transition-lmcv2 Sep 3 13:56:43.494: INFO: Received response from host: affinity-nodeport-transition-pqhpw Sep 3 13:56:43.494: INFO: Received response from host: affinity-nodeport-transition-pqhpw Sep 3 13:56:43.494: INFO: Received response from host: affinity-nodeport-transition-pqhpw Sep 3 13:56:43.494: INFO: Received response from host: affinity-nodeport-transition-wt4h6 Sep 3 13:56:43.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4159 exec execpod-affinityzlwct -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.9:32233/ ; done' Sep 3 13:56:43.842: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q 
-s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n" Sep 3 13:56:43.842: INFO: stdout: "\naffinity-nodeport-transition-wt4h6\naffinity-nodeport-transition-pqhpw\naffinity-nodeport-transition-pqhpw\naffinity-nodeport-transition-lmcv2\naffinity-nodeport-transition-wt4h6\naffinity-nodeport-transition-pqhpw\naffinity-nodeport-transition-lmcv2\naffinity-nodeport-transition-wt4h6\naffinity-nodeport-transition-pqhpw\naffinity-nodeport-transition-pqhpw\naffinity-nodeport-transition-wt4h6\naffinity-nodeport-transition-lmcv2\naffinity-nodeport-transition-pqhpw\naffinity-nodeport-transition-lmcv2\naffinity-nodeport-transition-wt4h6\naffinity-nodeport-transition-wt4h6" Sep 3 13:56:43.842: INFO: Received response from host: affinity-nodeport-transition-wt4h6 Sep 3 13:56:43.842: INFO: Received response from host: affinity-nodeport-transition-pqhpw Sep 3 13:56:43.842: INFO: Received response from host: affinity-nodeport-transition-pqhpw Sep 3 13:56:43.842: INFO: Received response from host: affinity-nodeport-transition-lmcv2 Sep 3 13:56:43.842: INFO: Received response from host: affinity-nodeport-transition-wt4h6 Sep 3 13:56:43.842: INFO: Received response from host: affinity-nodeport-transition-pqhpw Sep 3 13:56:43.842: INFO: Received response from host: affinity-nodeport-transition-lmcv2 Sep 3 13:56:43.842: INFO: Received response from host: affinity-nodeport-transition-wt4h6 Sep 3 13:56:43.842: INFO: Received response from host: 
affinity-nodeport-transition-pqhpw Sep 3 13:56:43.842: INFO: Received response from host: affinity-nodeport-transition-pqhpw Sep 3 13:56:43.842: INFO: Received response from host: affinity-nodeport-transition-wt4h6 Sep 3 13:56:43.842: INFO: Received response from host: affinity-nodeport-transition-lmcv2 Sep 3 13:56:43.842: INFO: Received response from host: affinity-nodeport-transition-pqhpw Sep 3 13:56:43.842: INFO: Received response from host: affinity-nodeport-transition-lmcv2 Sep 3 13:56:43.842: INFO: Received response from host: affinity-nodeport-transition-wt4h6 Sep 3 13:56:43.842: INFO: Received response from host: affinity-nodeport-transition-wt4h6 Sep 3 13:57:13.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4159 exec execpod-affinityzlwct -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.9:32233/ ; done' Sep 3 13:57:14.222: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.9:32233/\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://172.18.0.9:32233/\n" Sep 3 13:57:14.223: INFO: stdout: "\naffinity-nodeport-transition-lmcv2\naffinity-nodeport-transition-lmcv2\naffinity-nodeport-transition-lmcv2\naffinity-nodeport-transition-lmcv2\naffinity-nodeport-transition-lmcv2\naffinity-nodeport-transition-lmcv2\naffinity-nodeport-transition-lmcv2\naffinity-nodeport-transition-lmcv2\naffinity-nodeport-transition-lmcv2\naffinity-nodeport-transition-lmcv2\naffinity-nodeport-transition-lmcv2\naffinity-nodeport-transition-lmcv2\naffinity-nodeport-transition-lmcv2\naffinity-nodeport-transition-lmcv2\naffinity-nodeport-transition-lmcv2\naffinity-nodeport-transition-lmcv2" Sep 3 13:57:14.223: INFO: Received response from host: affinity-nodeport-transition-lmcv2 Sep 3 13:57:14.223: INFO: Received response from host: affinity-nodeport-transition-lmcv2 Sep 3 13:57:14.223: INFO: Received response from host: affinity-nodeport-transition-lmcv2 Sep 3 13:57:14.223: INFO: Received response from host: affinity-nodeport-transition-lmcv2 Sep 3 13:57:14.223: INFO: Received response from host: affinity-nodeport-transition-lmcv2 Sep 3 13:57:14.223: INFO: Received response from host: affinity-nodeport-transition-lmcv2 Sep 3 13:57:14.223: INFO: Received response from host: affinity-nodeport-transition-lmcv2 Sep 3 13:57:14.223: INFO: Received response from host: affinity-nodeport-transition-lmcv2 Sep 3 13:57:14.223: INFO: Received response from host: affinity-nodeport-transition-lmcv2 Sep 3 13:57:14.223: INFO: Received response from host: affinity-nodeport-transition-lmcv2 Sep 3 13:57:14.223: INFO: Received response from host: affinity-nodeport-transition-lmcv2 Sep 3 13:57:14.223: INFO: Received response from host: affinity-nodeport-transition-lmcv2 Sep 3 13:57:14.223: INFO: Received response from host: affinity-nodeport-transition-lmcv2 Sep 3 13:57:14.223: INFO: Received response from host: affinity-nodeport-transition-lmcv2 Sep 3 13:57:14.223: INFO: Received response from host: 
affinity-nodeport-transition-lmcv2 Sep 3 13:57:14.223: INFO: Received response from host: affinity-nodeport-transition-lmcv2 Sep 3 13:57:14.223: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-4159, will wait for the garbage collector to delete the pods Sep 3 13:57:14.293: INFO: Deleting ReplicationController affinity-nodeport-transition took: 6.332228ms Sep 3 13:57:14.393: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.269361ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:57:23.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4159" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:52.712 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":17,"skipped":301,"failed":0} Sep 3 13:57:23.718: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:55:46.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace 
[It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 3 13:57:46.817: INFO: Deleting pod "var-expansion-26002b38-63cf-43c4-aa2d-202552e0e6e8" in namespace "var-expansion-729" Sep 3 13:57:46.822: INFO: Wait up to 5m0s for pod "var-expansion-26002b38-63cf-43c4-aa2d-202552e0e6e8" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 13:57:48.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-729" for this suite. • [SLOW TEST:122.067 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":-1,"completed":27,"skipped":407,"failed":0} Sep 3 13:57:48.842: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 13:56:50.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 3 13:56:50.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
Sep 3 13:56:51.003: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-09-03T13:56:51Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-09-03T13:56:51Z]] name:name1 resourceVersion:1059231 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:94f0db58-4722-4dc2-b712-b71b5d50f85d] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Sep 3 13:57:01.010: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-09-03T13:57:01Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-09-03T13:57:01Z]] name:name2 resourceVersion:1059515 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:03668e24-b6bd-4df7-a5a1-761e989952b5] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Sep 3 13:57:11.021: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-09-03T13:56:51Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-09-03T13:57:11Z]] name:name1 resourceVersion:1059643 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:94f0db58-4722-4dc2-b712-b71b5d50f85d] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Sep 3 13:57:21.030: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-09-03T13:57:01Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-09-03T13:57:21Z]] name:name2 resourceVersion:1059765 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:03668e24-b6bd-4df7-a5a1-761e989952b5] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Sep 3 13:57:31.221: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-09-03T13:56:51Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-09-03T13:57:11Z]] name:name1 resourceVersion:1059859 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:94f0db58-4722-4dc2-b712-b71b5d50f85d] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Sep 3 13:57:41.230: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-09-03T13:57:01Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-09-03T13:57:21Z]] name:name2 resourceVersion:1059977 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:03668e24-b6bd-4df7-a5a1-761e989952b5] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:57:51.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-7088" for this suite.
• [SLOW TEST:61.347 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:56:46.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-9237
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-9237
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9237
Sep 3 13:56:47.054: INFO: Found 0 stateful pods, waiting for 1
Sep 3 13:56:57.059: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Sep 3 13:56:57.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9237 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Sep 3 13:56:57.318: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Sep 3 13:56:57.318: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Sep 3 13:56:57.318: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Sep 3 13:56:57.322: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Sep 3 13:57:07.325: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Sep 3 13:57:07.325: INFO: Waiting for statefulset status.replicas updated to 0
Sep 3 13:57:07.342: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999998944s
Sep 3 13:57:08.347: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.995182506s
Sep 3 13:57:09.351: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.990182519s
Sep 3 13:57:10.356: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.986225788s
Sep 3 13:57:11.423: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.980518619s
Sep 3 13:57:12.427: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.914171095s
Sep 3 13:57:13.430: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.909740467s
Sep 3 13:57:14.435: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.906360061s
Sep 3 13:57:15.440: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.902040167s
Sep 3 13:57:16.443: INFO: Verifying statefulset ss doesn't scale past 1 for another 897.016114ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9237
Sep 3 13:57:17.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9237 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 3 13:57:17.697: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Sep 3 13:57:17.698: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Sep 3 13:57:17.698: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Sep 3 13:57:17.703: INFO: Found 1 stateful pods, waiting for 3
Sep 3 13:57:27.708: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Sep 3 13:57:27.709: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Sep 3 13:57:27.709: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Sep 3 13:57:27.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9237 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Sep 3 13:57:28.542: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Sep 3 13:57:28.542: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Sep 3 13:57:28.542: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Sep 3 13:57:28.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9237 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Sep 3 13:57:29.428: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Sep 3 13:57:29.428: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Sep 3 13:57:29.428: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Sep 3 13:57:29.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9237 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Sep 3 13:57:30.424: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Sep 3 13:57:30.424: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Sep 3 13:57:30.424: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Sep 3 13:57:30.424: INFO: Waiting for statefulset status.replicas updated to 0
Sep 3 13:57:30.519: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Sep 3 13:57:40.528: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Sep 3 13:57:40.529: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Sep 3 13:57:40.529: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Sep 3 13:57:40.541: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999736s
Sep 3 13:57:41.545: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996178763s
Sep 3 13:57:42.550: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992576836s
Sep 3 13:57:43.555: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.987122s
Sep 3 13:57:44.560: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.982757872s
Sep 3 13:57:45.565: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.977382383s
Sep 3 13:57:46.570: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.972567268s
Sep 3 13:57:47.575: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.966998934s
Sep 3 13:57:48.581: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.962451982s
Sep 3 13:57:49.586: INFO: Verifying statefulset ss doesn't scale past 3 for another 956.718958ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-9237
Sep 3 13:57:50.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9237 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 3 13:57:50.874: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Sep 3 13:57:50.874: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Sep 3 13:57:50.875: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Sep 3 13:57:50.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9237 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 3 13:57:51.132: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Sep 3 13:57:51.132: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Sep 3 13:57:51.132: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Sep 3 13:57:51.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-9237 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 3 13:57:51.378: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Sep 3 13:57:51.378: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Sep 3 13:57:51.378: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Sep 3 13:57:51.378: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Sep 3 13:58:21.395: INFO: Deleting all statefulset in ns statefulset-9237
Sep 3 13:58:21.398: INFO: Scaling statefulset ss to 0
Sep 3 13:58:21.413: INFO: Waiting for statefulset status.replicas updated to 0
Sep 3 13:58:21.416: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:58:21.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9237" for this suite.
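[Editor's note] The "unhealthy" pods in the run above are produced by moving the file the readiness probe serves (index.html) out of the httpd docroot, and "healed" by moving it back; the trailing `|| true` keeps `kubectl exec` from failing the step if the file has already been moved. A local sketch of that move-and-restore pattern, with temporary directories standing in for the container paths (all paths here are illustrative, not the test's):

```shell
# Stand-ins for /usr/local/apache2/htdocs and /tmp inside the pod.
docroot=$(mktemp -d)
stash=$(mktemp -d)
echo 'hello' > "$docroot/index.html"

# Break "readiness": hide the probed file; || true tolerates a rerun
# where the file was already moved.
mv -v "$docroot/index.html" "$stash/" || true

# Restore "readiness": put the file back where the probe expects it.
mv -v "$stash/index.html" "$docroot/" || true
```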
• [SLOW TEST:94.504 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":21,"skipped":386,"failed":0}
Sep 3 13:58:21.444: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:56:40.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-1699
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a new StatefulSet
Sep 3 13:56:40.774: INFO: Found 0 stateful pods, waiting for 3
Sep 3 13:56:50.779: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Sep 3 13:56:50.779: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Sep 3 13:56:50.779: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Sep 3 13:57:00.779: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Sep 3 13:57:00.779: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Sep 3 13:57:00.779: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Sep 3 13:57:00.806: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Sep 3 13:57:10.841: INFO: Updating stateful set ss2
Sep 3 13:57:11.025: INFO: Waiting for Pod statefulset-1699/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Sep 3 13:57:21.033: INFO: Waiting for Pod statefulset-1699/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Sep 3 13:57:31.428: INFO: Found 2 stateful pods, waiting for 3
Sep 3 13:57:41.433: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Sep 3 13:57:41.433: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Sep 3 13:57:41.433: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Sep 3 13:57:41.459: INFO: Updating stateful set ss2
Sep 3 13:57:41.466: INFO: Waiting for Pod statefulset-1699/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Sep 3 13:57:51.472: INFO: Waiting for Pod statefulset-1699/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Sep 3 13:58:01.492: INFO: Updating stateful set ss2
Sep 3 13:58:01.499: INFO: Waiting for StatefulSet statefulset-1699/ss2 to complete update
Sep 3 13:58:01.499: INFO: Waiting for Pod statefulset-1699/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Sep 3 13:58:11.507: INFO: Waiting for StatefulSet statefulset-1699/ss2 to complete update
Sep 3 13:58:11.507: INFO: Waiting for Pod statefulset-1699/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Sep 3 13:58:21.507: INFO: Deleting all statefulset in ns statefulset-1699
Sep 3 13:58:21.510: INFO: Scaling statefulset ss2 to 0
Sep 3 13:59:01.526: INFO: Waiting for statefulset status.replicas updated to 0
Sep 3 13:59:01.529: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 13:59:01.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1699" for this suite.
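[Editor's note] The canary and phased rolling update exercised above are driven by the StatefulSet `updateStrategy.rollingUpdate.partition` field: pods with an ordinal greater than or equal to the partition roll to the new revision, while lower ordinals keep the old one, so lowering the partition step by step (2 → 1 → 0 for three replicas) phases the rollout. A minimal sketch of the relevant spec fields — labels, container name, and service name are assumptions; the set name and image tags mirror the log:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test          # headless service name (assumed)
  selector:
    matchLabels:
      app: ss2               # labels are illustrative
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2           # canary: only ss2-2 gets the new revision
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver      # container name is illustrative
        image: docker.io/library/httpd:2.4.39-alpine
```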
• [SLOW TEST:141.001 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":28,"skipped":491,"failed":0}
Sep 3 13:59:01.737: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 13:56:05.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod liveness-8d527dd0-7903-406e-98c3-89938eddc8a0 in namespace container-probe-5229
Sep 3 13:56:07.099: INFO: Started pod liveness-8d527dd0-7903-406e-98c3-89938eddc8a0 in namespace container-probe-5229
STEP: checking the pod's current state and verifying that restartCount is present
Sep 3 13:56:07.102: INFO: Initial restart count of pod liveness-8d527dd0-7903-406e-98c3-89938eddc8a0 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 14:00:08.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5229" for this suite.
• [SLOW TEST:243.334 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":798,"failed":0}
Sep 3 14:00:08.398: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":38,"skipped":701,"failed":0}
Sep 3 13:57:51.752: INFO: Running AfterSuite actions on all nodes
Sep 3 14:00:08.429: INFO: Running AfterSuite actions on node 1
Sep 3 14:00:08.429: INFO: Skipping dumping logs from cluster

Ran 286 of 5484 Specs in 641.040 seconds
SUCCESS! -- 286 Passed | 0 Failed | 0 Pending | 5198 Skipped

Ginkgo ran 1 suite in 10m42.700569924s
Test Suite Passed
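[Editor's note] The probe spec above ("should *not* be restarted with a tcp:8080 liveness probe") confirms that restartCount stays at 0 for roughly four minutes while the kubelet repeatedly opens a TCP connection to port 8080. The shape of such a probe, as a pod manifest sketch — the pod name, image, and timing values are illustrative, and the container image must actually listen on tcp/8080 for the probe to succeed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp        # name is illustrative
spec:
  containers:
  - name: server
    image: example.invalid/some-server:latest   # placeholder; must listen on 8080
    ports:
    - containerPort: 8080
    livenessProbe:
      tcpSocket:
        port: 8080          # kubelet opens a TCP connection; success = alive
      initialDelaySeconds: 15
      periodSeconds: 10
```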