I0202 23:37:38.618558 16 e2e.go:116] Starting e2e run "9506c34b-b8cd-45d0-8e2f-1ca388f853ed" on Ginkgo node 1
Feb 2 23:37:38.631: INFO: Enabling in-tree volume drivers
Running Suite: Kubernetes e2e suite - /usr/local/bin
====================================================
Random Seed: 1675381058 - will randomize all specs

Will run 1 of 7066 specs
------------------------------
[SynchronizedBeforeSuite]
test/e2e/e2e.go:76
  [SynchronizedBeforeSuite] TOP-LEVEL
  test/e2e/e2e.go:76
{"msg":"Test Suite starting","completed":0,"skipped":0,"failed":0}
Feb 2 23:37:38.766: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Feb 2 23:37:38.768: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 2 23:37:38.798: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 2 23:37:38.831: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 2 23:37:38.831: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 2 23:37:38.831: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 2 23:37:38.838: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
Feb 2 23:37:38.838: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Feb 2 23:37:38.838: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 2 23:37:38.838: INFO: e2e test version: v1.25.6
Feb 2 23:37:38.839: INFO: kube-apiserver version: v1.25.2
[SynchronizedBeforeSuite] TOP-LEVEL
  test/e2e/e2e.go:76
Feb 2 23:37:38.839: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Feb 2 23:37:38.844: INFO: Cluster IP family: ipv4
------------------------------
[SynchronizedBeforeSuite] PASSED [0.079 seconds]
[SynchronizedBeforeSuite]
test/e2e/e2e.go:76

  Begin Captured GinkgoWriter Output >>
    [elided: verbatim duplicate of the suite-setup log above]
  << End Captured GinkgoWriter Output
------------------------------
[skipped-spec "S" markers elided]
------------------------------
[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
test/e2e/kubectl/kubectl.go:392
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 02/02/23 23:37:39.136
Feb 2 23:37:39.136: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename kubectl 02/02/23 23:37:39.138
STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:37:39.148
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:37:39.152
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/kubectl/kubectl.go:272
[It] should create and stop a working application [Conformance]
  test/e2e/kubectl/kubectl.go:392
STEP: creating all guestbook components 02/02/23 23:37:39.156
Feb 2 23:37:39.156: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend

Feb 2 23:37:39.156: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:39779 --kubeconfig=/home/xtesting/.kube/config --namespace=kubectl-8985 create -f -'
Feb 2 23:37:39.519: INFO: stderr: ""
Feb 2 23:37:39.519: INFO: stdout: "service/agnhost-replica created\n"
Feb 2 23:37:39.519: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend

Feb 2 23:37:39.519: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:39779 --kubeconfig=/home/xtesting/.kube/config --namespace=kubectl-8985 create -f -'
Feb 2 23:37:39.878: INFO: stderr: ""
Feb 2 23:37:39.878: INFO: stdout: "service/agnhost-primary created\n"
Feb 2 23:37:39.878: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 2 23:37:39.878: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:39779 --kubeconfig=/home/xtesting/.kube/config --namespace=kubectl-8985 create -f -'
Feb 2 23:37:40.262: INFO: stderr: ""
Feb 2 23:37:40.262: INFO: stdout: "service/frontend created\n"
Feb 2 23:37:40.262: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.40
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Feb 2 23:37:40.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:39779 --kubeconfig=/home/xtesting/.kube/config --namespace=kubectl-8985 create -f -'
Feb 2 23:37:40.627: INFO: stderr: ""
Feb 2 23:37:40.627: INFO: stdout: "deployment.apps/frontend created\n"
Feb 2 23:37:40.627: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.40
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 2 23:37:40.627: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:39779 --kubeconfig=/home/xtesting/.kube/config --namespace=kubectl-8985 create -f -'
Feb 2 23:37:41.008: INFO: stderr: ""
Feb 2 23:37:41.008: INFO: stdout: "deployment.apps/agnhost-primary created\n"
Feb 2 23:37:41.008: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.40
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 2 23:37:41.008: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:39779 --kubeconfig=/home/xtesting/.kube/config --namespace=kubectl-8985 create -f -'
Feb 2 23:37:41.278: INFO: stderr: ""
Feb 2 23:37:41.279: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app 02/02/23 23:37:41.279
Feb 2 23:37:41.279: INFO: Waiting for all frontend pods to be Running.
Feb 2 23:37:46.331: INFO: Waiting for frontend to serve content.
Feb 2 23:37:46.340: INFO: Trying to add a new entry to the guestbook.
Feb 2 23:37:46.350: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources 02/02/23 23:37:46.356
Feb 2 23:37:46.356: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:39779 --kubeconfig=/home/xtesting/.kube/config --namespace=kubectl-8985 delete --grace-period=0 --force -f -'
Feb 2 23:37:46.476: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 2 23:37:46.476: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources 02/02/23 23:37:46.476
Feb 2 23:37:46.476: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:39779 --kubeconfig=/home/xtesting/.kube/config --namespace=kubectl-8985 delete --grace-period=0 --force -f -'
Feb 2 23:37:46.588: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n"
Feb 2 23:37:46.588: INFO: stdout: "service \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources 02/02/23 23:37:46.588
Feb 2 23:37:46.588: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:39779 --kubeconfig=/home/xtesting/.kube/config --namespace=kubectl-8985 delete --grace-period=0 --force -f -'
Feb 2 23:37:46.698: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 2 23:37:46.699: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources 02/02/23 23:37:46.699
Feb 2 23:37:46.699: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:39779 --kubeconfig=/home/xtesting/.kube/config --namespace=kubectl-8985 delete --grace-period=0 --force -f -'
Feb 2 23:37:46.814: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 2 23:37:46.814: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources 02/02/23 23:37:46.814
Feb 2 23:37:46.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:39779 --kubeconfig=/home/xtesting/.kube/config --namespace=kubectl-8985 delete --grace-period=0 --force -f -'
Feb 2 23:37:46.929: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n"
Feb 2 23:37:46.929: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources 02/02/23 23:37:46.929
Feb 2 23:37:46.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:39779 --kubeconfig=/home/xtesting/.kube/config --namespace=kubectl-8985 delete --grace-period=0 --force -f -'
Feb 2 23:37:47.033: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 2 23:37:47.033: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:187
Feb 2 23:37:47.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8985" for this suite. 02/02/23 23:37:47.036
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","completed":1,"skipped":4899,"failed":0}
------------------------------
• [SLOW TEST] [7.903 seconds]
[sig-cli] Kubectl client
test/e2e/kubectl/framework.go:23
  Guestbook application
  test/e2e/kubectl/kubectl.go:367
    should create and stop a working application [Conformance]
    test/e2e/kubectl/kubectl.go:392

  Begin Captured GinkgoWriter Output >>
    [elided: verbatim duplicate of the test log above]
  << End Captured GinkgoWriter Output
------------------------------
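As an aside (not part of the captured log): the create/validate/delete cycle the test performs can be approximated with plain kubectl. Everything below is a hypothetical sketch — the namespace name, the manifest file `guestbook-manifests.yaml` (containing the six resources printed in the log above), and the timeout are assumptions, and a reachable cluster with kubectl on PATH is required.

```shell
#!/usr/bin/env sh
# Hedged reproduction sketch of the guestbook test flow; all names here are
# hypothetical, not taken from the log.
set -eu

NS=guestbook-demo   # assumed namespace; the test generates one like kubectl-8985
kubectl create namespace "$NS"

# Create the two backend Services, the frontend Service, and the three
# Deployments shown in the log (collected into one assumed manifest file).
kubectl --namespace "$NS" apply -f guestbook-manifests.yaml

# Wait for the three frontend replicas, as the test does before sending traffic.
kubectl --namespace "$NS" rollout status deployment/frontend --timeout=120s

# Force-delete, mirroring the test's cleanup; this triggers the same
# "Immediate deletion does not wait for confirmation" warning seen above.
kubectl --namespace "$NS" delete -f guestbook-manifests.yaml --grace-period=0 --force
kubectl delete namespace "$NS"
```

The `--grace-period=0 --force` pair is why the log shows the immediate-deletion warning for every resource: kubectl removes the API object without waiting for the pods to confirm termination.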
[skipped-spec "S" markers elided]
------------------------------
[SynchronizedAfterSuite]
test/e2e/e2e.go:87
  [SynchronizedAfterSuite] TOP-LEVEL
  test/e2e/e2e.go:87
{"msg":"Test Suite completed","completed":1,"skipped":7065,"failed":0}
Feb 2 23:37:47.132: INFO: Running AfterSuite actions on all nodes
Feb 2 23:37:47.132: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func20.2
Feb 2 23:37:47.132: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func10.2
Feb 2 23:37:47.132: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2
Feb 2 23:37:47.132: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Feb 2 23:37:47.132: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Feb 2 23:37:47.132: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Feb 2 23:37:47.132: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3
[SynchronizedAfterSuite] TOP-LEVEL
  test/e2e/e2e.go:87
Feb 2 23:37:47.132: INFO: Running AfterSuite actions on node 1
Feb 2 23:37:47.132: INFO: Skipping dumping logs from cluster
------------------------------
[SynchronizedAfterSuite] PASSED [0.000 seconds]
[SynchronizedAfterSuite]
test/e2e/e2e.go:87

  Begin Captured GinkgoWriter Output >>
    [elided: verbatim duplicate of the AfterSuite log above]
  << End Captured GinkgoWriter Output
------------------------------
[ReportAfterSuite] Kubernetes e2e suite report
test/e2e/e2e_test.go:146
  [ReportAfterSuite] TOP-LEVEL
  test/e2e/e2e_test.go:146
------------------------------
[ReportAfterSuite] PASSED [0.000 seconds]
[ReportAfterSuite] Kubernetes e2e suite report
test/e2e/e2e_test.go:146

  Begin Captured GinkgoWriter Output >>
    [ReportAfterSuite] TOP-LEVEL
    test/e2e/e2e_test.go:146
  << End Captured GinkgoWriter Output
------------------------------
[ReportAfterSuite] Kubernetes e2e JUnit report
test/e2e/framework/test_context.go:559
  [ReportAfterSuite] TOP-LEVEL
  test/e2e/framework/test_context.go:559
------------------------------
[ReportAfterSuite] PASSED [0.098 seconds]
[ReportAfterSuite] Kubernetes e2e JUnit report
test/e2e/framework/test_context.go:559

  Begin Captured GinkgoWriter Output >>
    [ReportAfterSuite] TOP-LEVEL
    test/e2e/framework/test_context.go:559
  << End Captured GinkgoWriter Output
------------------------------

Ran 1 of 7066 Specs in 8.367 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 7065 Skipped
PASS

Ginkgo ran 1 suite in 8.733835079s
Test Suite Passed
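The run above executed 1 of 7066 specs, i.e. the suite was focused on a single conformance test. A hedged sketch of how such a focused run is typically invoked with the upstream e2e.test binary (the binary path and the use of `--ginkgo.seed` here are assumptions; verify the flags against your e2e.test version):

```shell
# Hypothetical invocation sketch; assumes a v1.25.x e2e.test binary on PATH
# and a reachable cluster. --ginkgo.focus selects specs by regex, as in the
# "Will run 1 of 7066 specs" header above.
e2e.test \
  --kubeconfig="$HOME/.kube/config" \
  --ginkgo.focus='Guestbook application should create and stop a working application' \
  --ginkgo.seed=1675381058   # assumed flag: reuse the logged random seed
```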