I0811 18:20:25.854678 7 e2e.go:243] Starting e2e run "6dbabb6f-f258-44a5-ade8-51175d50aa03" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1597170015 - Will randomize all specs
Will run 1 of 4413 specs

Aug 11 18:20:26.547: INFO: >>> kubeConfig: /root/.kube/config
Aug 11 18:20:26.609: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 11 18:20:26.830: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 11 18:20:28.183: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (1 seconds elapsed)
Aug 11 18:20:28.183: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 11 18:20:28.183: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 11 18:20:28.485: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 11 18:20:28.485: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 11 18:20:28.485: INFO: e2e test version: v1.15.12
Aug 11 18:20:28.490: INFO: kube-apiserver version: v1.15.12
S… (skipped-spec progress markers elided)
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 18:20:28.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Aug 11 18:20:31.624: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Aug 11 18:20:31.630: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Aug 11 18:20:31.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3175'
Aug 11 18:21:47.987: INFO: stderr: ""
Aug 11 18:21:47.988: INFO: stdout: "service/redis-slave created\n"
Aug 11 18:21:47.989: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Aug 11 18:21:47.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3175'
Aug 11 18:21:56.967: INFO: stderr: ""
Aug 11 18:21:56.967: INFO: stdout: "service/redis-master created\n"
Aug 11 18:21:56.968: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug 11 18:21:56.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3175'
Aug 11 18:22:05.256: INFO: stderr: ""
Aug 11 18:22:05.256: INFO: stdout: "service/frontend created\n"
Aug 11 18:22:05.258: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Aug 11 18:22:05.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3175'
Aug 11 18:22:10.578: INFO: stderr: ""
Aug 11 18:22:10.578: INFO: stdout: "deployment.apps/frontend created\n"
Aug 11 18:22:10.580: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 11 18:22:10.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3175'
Aug 11 18:22:17.009: INFO: stderr: ""
Aug 11 18:22:17.009: INFO: stdout: "deployment.apps/redis-master created\n"
created\n" Aug 11 18:22:17.011: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 selector: matchLabels: app: redis role: slave tier: backend template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Aug 11 18:22:17.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3175' Aug 11 18:22:25.077: INFO: stderr: "" Aug 11 18:22:25.077: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app Aug 11 18:22:25.078: INFO: Waiting for all frontend pods to be Running. Aug 11 18:32:25.170: INFO: Unexpected error occurred: Timeout while waiting for pods with labels "app=guestbook,tier=frontend" to be running STEP: using delete to clean up resources Aug 11 18:32:25.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3175' Aug 11 18:33:44.231: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 11 18:33:44.231: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Aug 11 18:33:44.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3175' Aug 11 18:33:49.967: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 11 18:33:49.968: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Aug 11 18:33:49.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3175' Aug 11 18:34:00.162: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 11 18:34:00.163: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 11 18:34:00.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3175' Aug 11 18:34:01.876: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 11 18:34:01.877: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 11 18:34:01.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3175' Aug 11 18:34:04.924: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 11 18:34:04.925: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Aug 11 18:34:04.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3175' Aug 11 18:34:10.849: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 11 18:34:10.849: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Collecting events from namespace "kubectl-3175". STEP: Found 39 events. Aug 11 18:34:12.359: INFO: At 2020-08-11 18:22:11 +0000 UTC - event for frontend: {deployment-controller } ScalingReplicaSet: Scaled up replica set frontend-6d89d458dc to 3 Aug 11 18:34:12.361: INFO: At 2020-08-11 18:22:13 +0000 UTC - event for frontend-6d89d458dc: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6d89d458dc-xhn6x Aug 11 18:34:12.361: INFO: At 2020-08-11 18:22:15 +0000 UTC - event for frontend-6d89d458dc: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6d89d458dc-s9smb Aug 11 18:34:12.361: INFO: At 2020-08-11 18:22:15 +0000 UTC - event for frontend-6d89d458dc: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6d89d458dc-t48q6 Aug 11 18:34:12.361: INFO: At 2020-08-11 18:22:15 +0000 UTC - event for frontend-6d89d458dc-xhn6x: {default-scheduler } Scheduled: Successfully assigned kubectl-3175/frontend-6d89d458dc-xhn6x to iruya-worker Aug 11 18:34:12.361: INFO: At 2020-08-11 18:22:16 +0000 UTC - event for frontend-6d89d458dc-s9smb: {default-scheduler } Scheduled: Successfully assigned kubectl-3175/frontend-6d89d458dc-s9smb to iruya-worker2 Aug 11 18:34:12.361: INFO: At 2020-08-11 18:22:16 +0000 UTC - event for frontend-6d89d458dc-t48q6: {default-scheduler } Scheduled: Successfully assigned kubectl-3175/frontend-6d89d458dc-t48q6 to iruya-worker Aug 11 18:34:12.361: INFO: At 2020-08-11 18:22:18 +0000 UTC - event for redis-master: {deployment-controller } ScalingReplicaSet: Scaled up replica set redis-master-6dc85bfbcd to 1 Aug 11 18:34:12.361: INFO: At 2020-08-11 18:22:21 +0000 UTC - event for redis-master-6dc85bfbcd: {replicaset-controller } SuccessfulCreate: Created pod: redis-master-6dc85bfbcd-btsk4 Aug 11 18:34:12.361: INFO: At 2020-08-11 18:22:22 +0000 UTC - event for redis-master-6dc85bfbcd-btsk4: {default-scheduler } Scheduled: Successfully assigned kubectl-3175/redis-master-6dc85bfbcd-btsk4 to iruya-worker2 Aug 11 18:34:12.361: INFO: At 2020-08-11 18:22:26 +0000 UTC - event for redis-slave: {deployment-controller } ScalingReplicaSet: Scaled up replica set redis-slave-757665758c to 2 Aug 11 18:34:12.361: INFO: At 2020-08-11 18:22:26 +0000 UTC - event for redis-slave-757665758c: {replicaset-controller } SuccessfulCreate: Created pod: redis-slave-757665758c-gkd5w Aug 11 18:34:12.361: INFO: At 2020-08-11 18:22:27 +0000 UTC - event for redis-slave-757665758c: {replicaset-controller } SuccessfulCreate: Created pod: redis-slave-757665758c-jzphb Aug 11 18:34:12.362: INFO: At 2020-08-11 18:22:27 +0000 UTC - event for redis-slave-757665758c-gkd5w: {default-scheduler } Scheduled: Successfully assigned kubectl-3175/redis-slave-757665758c-gkd5w to iruya-worker Aug 11 18:34:12.362: INFO: At 2020-08-11 
Aug 11 18:34:12.362: INFO: At 2020-08-11 18:26:18 +0000 UTC - event for frontend-6d89d458dc-s9smb: {kubelet iruya-worker2} FailedCreatePodSandBox: Failed create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Aug 11 18:34:12.362: INFO: At 2020-08-11 18:26:18 +0000 UTC - event for frontend-6d89d458dc-t48q6: {kubelet iruya-worker} FailedCreatePodSandBox: Failed create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Aug 11 18:34:12.362: INFO: At 2020-08-11 18:26:19 +0000 UTC - event for frontend-6d89d458dc-xhn6x: {kubelet iruya-worker} FailedCreatePodSandBox: Failed create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Aug 11 18:34:12.362: INFO: At 2020-08-11 18:26:23 +0000 UTC - event for redis-master-6dc85bfbcd-btsk4: {kubelet iruya-worker2} FailedCreatePodSandBox: Failed create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Aug 11 18:34:12.362: INFO: At 2020-08-11 18:26:27 +0000 UTC - event for redis-slave-757665758c-gkd5w: {kubelet iruya-worker} FailedCreatePodSandBox: Failed create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Aug 11 18:34:12.362: INFO: At 2020-08-11 18:26:29 +0000 UTC - event for redis-slave-757665758c-jzphb: {kubelet iruya-worker2} FailedCreatePodSandBox: Failed create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Aug 11 18:34:12.362: INFO: At 2020-08-11 18:31:12 +0000 UTC - event for frontend-6d89d458dc-t48q6: {kubelet iruya-worker} Pulled: Container image "gcr.io/google-samples/gb-frontend:v6" already present on machine
Aug 11 18:34:12.362: INFO: At 2020-08-11 18:31:12 +0000 UTC - event for frontend-6d89d458dc-xhn6x: {kubelet iruya-worker} Pulled: Container image "gcr.io/google-samples/gb-frontend:v6" already present on machine
Aug 11 18:34:12.362: INFO: At 2020-08-11 18:31:12 +0000 UTC - event for redis-slave-757665758c-gkd5w: {kubelet iruya-worker} Pulled: Container image "gcr.io/google-samples/gb-redisslave:v3" already present on machine
Aug 11 18:34:12.362: INFO: At 2020-08-11 18:31:13 +0000 UTC - event for frontend-6d89d458dc-s9smb: {kubelet iruya-worker2} Pulled: Container image "gcr.io/google-samples/gb-frontend:v6" already present on machine
Aug 11 18:34:12.362: INFO: At 2020-08-11 18:31:13 +0000 UTC - event for redis-master-6dc85bfbcd-btsk4: {kubelet iruya-worker2} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/redis:1.0" already present on machine
Aug 11 18:34:12.362: INFO: At 2020-08-11 18:31:14 +0000 UTC - event for redis-slave-757665758c-jzphb: {kubelet iruya-worker2} Pulled: Container image "gcr.io/google-samples/gb-redisslave:v3" already present on machine
Aug 11 18:34:12.362: INFO: At 2020-08-11 18:33:12 +0000 UTC - event for frontend-6d89d458dc-t48q6: {kubelet iruya-worker} Failed: Error: context deadline exceeded
Aug 11 18:34:12.362: INFO: At 2020-08-11 18:33:12 +0000 UTC - event for frontend-6d89d458dc-xhn6x: {kubelet iruya-worker} Failed: Error: context deadline exceeded
Aug 11 18:34:12.362: INFO: At 2020-08-11 18:33:12 +0000 UTC - event for redis-slave-757665758c-gkd5w: {kubelet iruya-worker} Failed: Error: context deadline exceeded
Aug 11 18:34:12.362: INFO: At 2020-08-11 18:33:13 +0000 UTC - event for frontend-6d89d458dc-s9smb: {kubelet iruya-worker2} Failed: Error: context deadline exceeded
Aug 11 18:34:12.362: INFO: At 2020-08-11 18:33:13 +0000 UTC - event for frontend-6d89d458dc-s9smb: {kubelet iruya-worker2} Failed: Error: failed to reserve container name "php-redis_frontend-6d89d458dc-s9smb_kubectl-3175_5924c3a9-91a8-4bc7-a7b2-fd9eb6a369d4_0": name "php-redis_frontend-6d89d458dc-s9smb_kubectl-3175_5924c3a9-91a8-4bc7-a7b2-fd9eb6a369d4_0" is reserved for "61d2acc1e073c69b6d8fff14a5620ba0ce9492aa2b81d394ef3cbf262415f98c"
Aug 11 18:34:12.362: INFO: At 2020-08-11 18:33:13 +0000 UTC - event for frontend-6d89d458dc-t48q6: {kubelet iruya-worker} Failed: Error: failed to reserve container name "php-redis_frontend-6d89d458dc-t48q6_kubectl-3175_f2368f26-9b4d-4fc2-9fc2-217555cb7b9b_0": name "php-redis_frontend-6d89d458dc-t48q6_kubectl-3175_f2368f26-9b4d-4fc2-9fc2-217555cb7b9b_0" is reserved for "f448dcd5c82cb53b59b9228d35e682439c8d163fe423d847480614eb150dac2e"
Aug 11 18:34:12.362: INFO: At 2020-08-11 18:33:13 +0000 UTC - event for frontend-6d89d458dc-xhn6x: {kubelet iruya-worker} Failed: Error: failed to reserve container name "php-redis_frontend-6d89d458dc-xhn6x_kubectl-3175_51f16d3b-48d5-434f-b4c6-f2ca5a434dc4_0": name "php-redis_frontend-6d89d458dc-xhn6x_kubectl-3175_51f16d3b-48d5-434f-b4c6-f2ca5a434dc4_0" is reserved for "c601ee7c2f30e93372b1de1252df37a26d473d3dcd7ce1a4c2e69fe7fae62f8d"
Aug 11 18:34:12.362: INFO: At 2020-08-11 18:33:13 +0000 UTC - event for redis-master-6dc85bfbcd-btsk4: {kubelet iruya-worker2} Failed: Error: context deadline exceeded
Aug 11 18:34:12.362: INFO: At 2020-08-11 18:33:13 +0000 UTC - event for redis-master-6dc85bfbcd-btsk4: {kubelet iruya-worker2} Failed: Error: failed to reserve container name "master_redis-master-6dc85bfbcd-btsk4_kubectl-3175_99f83f7e-c601-4cd4-ab3d-7ff217ea937e_0": name "master_redis-master-6dc85bfbcd-btsk4_kubectl-3175_99f83f7e-c601-4cd4-ab3d-7ff217ea937e_0" is reserved for "bc68398aedc7586e218151354868633e9e36f998687883d12803a43b01308d9b"
Aug 11 18:34:12.363: INFO: At 2020-08-11 18:33:13 +0000 UTC - event for redis-slave-757665758c-gkd5w: {kubelet iruya-worker} Failed: Error: failed to reserve container name "slave_redis-slave-757665758c-gkd5w_kubectl-3175_73ee48f5-133c-4c2c-b5ab-f28cc36fa9f0_0": name "slave_redis-slave-757665758c-gkd5w_kubectl-3175_73ee48f5-133c-4c2c-b5ab-f28cc36fa9f0_0" is reserved for "ec27f19a0edc35fa883b993a98db311d57fe5110fab8486ff40756c0cc0b606e"
Aug 11 18:34:12.363: INFO: At 2020-08-11 18:33:14 +0000 UTC - event for redis-slave-757665758c-jzphb: {kubelet iruya-worker2} Failed: Error: context deadline exceeded
Aug 11 18:34:12.363: INFO: At 2020-08-11 18:33:15 +0000 UTC - event for redis-slave-757665758c-jzphb: {kubelet iruya-worker2} Failed: Error: failed to reserve container name "slave_redis-slave-757665758c-jzphb_kubectl-3175_130b400c-87f2-4625-92cc-9839e61563ec_0": name "slave_redis-slave-757665758c-jzphb_kubectl-3175_130b400c-87f2-4625-92cc-9839e61563ec_0" is reserved for "b88b0314ec62db3732dfb93fc5f76043d5156c3b2633bbf09ee2b1d20d065a9b"
Aug 11 18:34:13.584: INFO: POD                            NODE           PHASE    GRACE  CONDITIONS
Aug 11 18:34:13.585: INFO: frontend-6d89d458dc-xhn6x      iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 18:22:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 18:22:17 +0000 UTC ContainersNotReady containers with unready status: [php-redis]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 18:22:17 +0000 UTC ContainersNotReady containers with unready status: [php-redis]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 18:22:13 +0000 UTC }]
Aug 11 18:34:13.586: INFO: redis-master-6dc85bfbcd-btsk4  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 18:22:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 18:22:23 +0000 UTC ContainersNotReady containers with unready status: [master]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 18:22:23 +0000 UTC ContainersNotReady containers with unready status: [master]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 18:22:21 +0000 UTC }]
Aug 11 18:34:13.586: INFO: redis-slave-757665758c-gkd5w   iruya-worker   Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 18:22:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 18:22:27 +0000 UTC ContainersNotReady containers with unready status: [slave]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 18:22:27 +0000 UTC ContainersNotReady containers with unready status: [slave]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 18:22:26 +0000 UTC }]
Aug 11 18:34:13.587: INFO: redis-slave-757665758c-jzphb   iruya-worker2  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 18:22:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 18:22:29 +0000 UTC ContainersNotReady containers with unready status: [slave]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 18:22:29 +0000 UTC ContainersNotReady containers with unready status: [slave]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 18:22:27 +0000 UTC }]
Aug 11 18:34:13.587: INFO:
Aug 11 18:34:16.183: INFO: Logging node info for node iruya-control-plane
Aug 11 18:34:17.484: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-control-plane,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-control-plane,UID:f4f0b8e7-6069-499e-8bff-b29d70e71db7,ResourceVersion:4272621,Generation:0,CreationTimestamp:2020-07-19 21:15:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-control-plane,kubernetes.io/os: linux,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[{node-role.kubernetes.io/master NoSchedule }],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-08-11 18:33:37 +0000 UTC 2020-07-19 21:15:33 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-08-11 18:33:37 +0000 UTC 2020-07-19 21:15:33 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-08-11 18:33:37 +0000 UTC 2020-07-19 21:15:33 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-08-11 18:33:37 +0000 UTC 2020-07-19 21:16:03 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.18.0.9} {Hostname iruya-control-plane}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ca83ac9a93d54502bb9afb972c3f1f0b,SystemUUID:1d4ac873-683f-4805-8579-15bbb4e4df77,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.4.0-beta.1-85-g334f567e,KubeletVersion:v1.15.12,KubeProxyVersion:v1.15.12,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd:3.3.10] 258352566} {[k8s.gcr.io/kube-apiserver:v1.15.12] 249278220} {[k8s.gcr.io/kube-controller-manager:v1.15.12] 200111388} {[docker.io/kindest/kindnetd:0.5.4] 113207016} {[k8s.gcr.io/kube-proxy:v1.15.12] 97386120} {[k8s.gcr.io/kube-scheduler:v1.15.12] 96590091} {[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[docker.io/rancher/local-path-provisioner:v0.0.12] 41994847} {[k8s.gcr.io/coredns:1.3.1] 40532446} {[k8s.gcr.io/pause:3.1] 746479}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Aug 11 18:34:17.497: INFO: Logging kubelet events for node iruya-control-plane
Aug 11 18:34:18.586: INFO: Logging pods the kubelet thinks is on node iruya-control-plane
Aug 11 18:34:19.327: INFO: kube-scheduler-iruya-control-plane started at 2020-07-19 21:15:13 +0000 UTC (0+1 container statuses recorded)
Aug 11 18:34:19.327: INFO: Container kube-scheduler ready: true, restart count 0
Aug 11 18:34:19.327: INFO: kube-proxy-nwhvb started at 2020-07-19 21:15:53 +0000 UTC (0+1 container statuses recorded)
Aug 11 18:34:19.328: INFO: Container kube-proxy ready: true, restart count 0
Aug 11 18:34:19.328: INFO: coredns-5d4dd4b4db-w42x4 started at 2020-07-19 21:16:07 +0000 UTC (0+1 container statuses recorded)
Aug 11 18:34:19.328: INFO: Container coredns ready: true, restart count 0
Aug 11 18:34:19.328: INFO: coredns-5d4dd4b4db-clz9n started at 2020-07-19 21:16:07 +0000 UTC (0+1 container statuses recorded)
Aug 11 18:34:19.328: INFO: Container coredns ready: true, restart count 0
Aug 11 18:34:19.328: INFO: etcd-iruya-control-plane started at 2020-07-19 21:15:13 +0000 UTC (0+1 container statuses recorded)
Aug 11 18:34:19.328: INFO: Container etcd ready: true, restart count 0
Aug 11 18:34:19.328: INFO: kube-apiserver-iruya-control-plane started at 2020-07-19 21:15:13 +0000 UTC (0+1 container statuses recorded)
Aug 11 18:34:19.328: INFO: Container kube-apiserver ready: true, restart count 0
Aug 11 18:34:19.328: INFO: kube-controller-manager-iruya-control-plane started at 2020-07-19 21:15:13 +0000 UTC (0+1 container statuses recorded)
Aug 11 18:34:19.328: INFO: Container kube-controller-manager ready: true, restart count 0
Aug 11 18:34:19.328: INFO: kindnet-xbjsm started at 2020-07-19 21:15:53 +0000 UTC (0+1 container statuses recorded)
Aug 11 18:34:19.328: INFO: Container kindnet-cni ready: true, restart count 0
Aug 11 18:34:19.328: INFO: local-path-provisioner-668779bd7-sf66r started at 2020-07-19 21:16:03 +0000 UTC (0+1 container statuses recorded)
Aug 11 18:34:19.328: INFO: Container local-path-provisioner ready: true, restart count 0
W0811 18:34:20.183507 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 11 18:34:21.009: INFO: Latency metrics for node iruya-control-plane
Aug 11 18:34:21.009: INFO: Logging node info for node iruya-worker
Aug 11 18:34:21.090: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-worker,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-worker,UID:c1fda9d3-dec3-4ae2-abb7-262d0e3c3bf8,ResourceVersion:4272662,Generation:0,CreationTimestamp:2020-07-19 21:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-worker,kubernetes.io/os: linux,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-08-11 18:33:52 +0000 UTC 2020-07-19 21:16:07 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-08-11 18:33:52 +0000 UTC 2020-07-19 21:16:07 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-08-11 18:33:52 +0000 UTC 2020-07-19 21:16:07 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-08-11 18:33:52 +0000 UTC 2020-07-19 21:16:37 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.18.0.5} {Hostname iruya-worker}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4b78e6a1abc44f7495c448bdd4d05c19,SystemUUID:ee65a740-deb7-49d1-9d0a-395ba195a769,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.4.0-beta.1-85-g334f567e,KubeletVersion:v1.15.12,KubeProxyVersion:v1.15.12,OperatingSystem:linux,Architecture:amd64,},Images:[{[docker.io/ollivier/clearwater-cassandra@sha256:24f49e5e936930e808cd79cac72fd4f2dc87e97b33a9dedecf60d0eb1f655015 docker.io/ollivier/clearwater-cassandra:latest] 386316854} {[docker.io/ollivier/clearwater-homestead-prov@sha256:21ea1bbce8747d80fc46c07ee0bdb94653036ee544413853074f39900798a7d8 docker.io/ollivier/clearwater-homestead-prov:latest] 360555271} {[docker.io/ollivier/clearwater-ellis@sha256:ba47a8963e0683886890de11cf65942f3460ec4e2ad313f1e0fe0d144b12969b docker.io/ollivier/clearwater-ellis:latest] 351389939} {[docker.io/ollivier/clearwater-homer@sha256:69b5406c3dcf61c95a067571c873b8691dc7cb23b24dbe3749b0a1d2b7c08ca9 docker.io/ollivier/clearwater-homer:latest] 344133365} {[docker.io/ollivier/clearwater-astaire@sha256:24e9186a8be32af9559f4d198c5c423eaac0d6c7b827c5ab674f2d124385c2fb docker.io/ollivier/clearwater-astaire:latest] 327029020} {[docker.io/ollivier/clearwater-bono@sha256:25b1c4759aa4dd92b752451e64f9df5f4a6336d74a15dd5914fbb83ab81ab9f4 docker.io/ollivier/clearwater-bono:latest] 303484988} {[docker.io/ollivier/clearwater-sprout@sha256:5a833832419bcf25ea1044768038c885ed4bad73225d5d07fc54eebc2a56662b docker.io/ollivier/clearwater-sprout:latest] 298458075} {[docker.io/ollivier/clearwater-homestead@sha256:6eebbdbc9e424dd87b3d149b9fa1c779ad5c402e2f7ef414ec585a43ebb782d6 docker.io/ollivier/clearwater-homestead:latest] 294998669} {[docker.io/ollivier/clearwater-ralf@sha256:becf37bf5c8d9f81189d9d727c3c3ab7e032b7de3a710f7bbb264d35d442a344 docker.io/ollivier/clearwater-ralf:latest] 287275238} {[docker.io/ollivier/clearwater-chronos@sha256:83359fb6320eefc0adbf56fcd4eb7a19be2c53dadaa4944a20510cc761536222 docker.io/ollivier/clearwater-chronos:latest] 285335126} {[k8s.gcr.io/etcd:3.3.10] 258352566} {[k8s.gcr.io/kube-apiserver:v1.15.12] 249278220} {[k8s.gcr.io/kube-controller-manager:v1.15.12] 200111388} {[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 142444388} {[docker.io/aquasec/kube-hunter@sha256:fee4656ab3b4db6aba14143a8a8e1aa77ac743e3574e7f9ca126a96887505ccc docker.io/aquasec/kube-hunter:latest] 127871601} {[docker.io/kindest/kindnetd:0.5.4] 113207016} {[k8s.gcr.io/kube-proxy:v1.15.12] 97386120} {[k8s.gcr.io/kube-scheduler:v1.15.12] 96590091} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 85425365} {[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[docker.io/rancher/local-path-provisioner:v0.0.12] 41994847} {[k8s.gcr.io/coredns:1.3.1] 40532446} {[docker.io/ollivier/clearwater-live-test@sha256:7d2da02dca6f486c0f48830ae9d064712a7429523a749953e9fb516ec77637c4 docker.io/ollivier/clearwater-live-test:latest] 39175389} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 36655159} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 16222606} {[docker.io/aquasec/kube-bench@sha256:2bd792eae0d13222bbf7a3641328b2a8cbe80f39c04575d06754e63da6e46cc7] 8042967} {[docker.io/aquasec/kube-bench@sha256:7a12e0a4544bc87e0e9ce182a2cb6e23218f2df61723f38775a0c9cd454f3579 docker.io/aquasec/kube-bench:latest] 8029635} {[docker.io/aquasec/kube-bench@sha256:3e0a46bdd1bc66379e3803bf5d632a840a7a29b3124ffdda8d202f1751edce24] 8029253} {[docker.io/aquasec/kube-bench@sha256:a020955b7fba7b00b4bb6bead0e092adaf13625b660bcb1d230b6c3adb5271f4] 8028946} {[docker.io/aquasec/kube-bench@sha256:48efacc96c6ffc519bc9d0558719896a556bf5255c79651e97bca9094ce9bf14] 8028937} {[docker.io/aquasec/kube-bench@sha256:41c9f89fd7da8236f5ea798dd4ac1e6fbb802a9fb6512050d8cdb4d647ae0329] 8028387} {[docker.io/aquasec/kube-bench@sha256:c21b290f8708caa1754d54f9903762bdc7aa4609f2292b7d376f9cd0ebbc800e] 8028386} {[quay.io/coreos/etcd:v2.2.5] 7670543} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 7398578} {[docker.io/library/nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 docker.io/library/nginx:1.15-alpine] 6999654} {[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine] 6978806} {[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0] 4381769} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 4331310} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 3854313} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 2943605} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 2785431} {[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest] 2779755} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 2509546} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 2258365} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 1804628} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 1799936} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 1791163} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 1772917} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 1039914} {[docker.io/library/busybox@sha256:4f47c01fa91355af2865ac10fef5bf6ec9c7f42ad2321377c21e844427972977 docker.io/library/busybox:latest] 767890}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Aug 11 18:34:21.093: INFO: Logging kubelet events for node iruya-worker
Aug 11 18:34:21.429: INFO: Logging pods the kubelet thinks is on node iruya-worker
Aug 11 18:34:21.439: INFO: kube-proxy-jzrnl started at 2020-07-19 21:16:08 +0000 UTC (0+1 container statuses recorded)
Aug 11 18:34:21.439: INFO: Container kube-proxy ready: true, restart count 0
Aug 11 18:34:21.439: INFO: kindnet-k7tjm started at 2020-07-19 21:16:08 +0000 UTC (0+1 container statuses recorded)
Aug 11 18:34:21.439: INFO: Container kindnet-cni ready: true, restart count 0
W0811 18:34:21.444884 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 11 18:34:21.564: INFO: Latency metrics for node iruya-worker
Aug 11 18:34:21.564: INFO: Logging node info for node iruya-worker2
Aug 11 18:34:21.571: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-worker2,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-worker2,UID:d9b9ff8d-1192-4ab4-b4bd-b82eb7ca9a0d,ResourceVersion:4272614,Generation:0,CreationTimestamp:2020-07-19 21:16:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-worker2,kubernetes.io/os: linux,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-08-11 18:33:34 +0000 UTC 2020-07-19 21:16:09 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-08-11 18:33:34 +0000 UTC 2020-07-19 21:16:09 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-08-11 18:33:34 +0000 UTC 2020-07-19 21:16:09 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-08-11 18:33:34 +0000 UTC 2020-07-19 21:16:39 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.18.0.7} {Hostname iruya-worker2}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b694dd7803b94b1e91a98e9c9f12d9bc,SystemUUID:2895fffa-8ddd-4241-b600-4baede70dc19,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.4.0-beta.1-85-g334f567e,KubeletVersion:v1.15.12,KubeProxyVersion:v1.15.12,OperatingSystem:linux,Architecture:amd64,},Images:[{[docker.io/ollivier/clearwater-cassandra@sha256:24f49e5e936930e808cd79cac72fd4f2dc87e97b33a9dedecf60d0eb1f655015 docker.io/ollivier/clearwater-cassandra:latest] 386316854} {[docker.io/ollivier/clearwater-homestead-prov@sha256:21ea1bbce8747d80fc46c07ee0bdb94653036ee544413853074f39900798a7d8 docker.io/ollivier/clearwater-homestead-prov:latest] 360555271} {[docker.io/ollivier/clearwater-homer@sha256:69b5406c3dcf61c95a067571c873b8691dc7cb23b24dbe3749b0a1d2b7c08ca9 docker.io/ollivier/clearwater-homer:latest] 344133365} {[docker.io/ollivier/clearwater-astaire@sha256:24e9186a8be32af9559f4d198c5c423eaac0d6c7b827c5ab674f2d124385c2fb docker.io/ollivier/clearwater-astaire:latest] 327029020} {[docker.io/ollivier/clearwater-bono@sha256:25b1c4759aa4dd92b752451e64f9df5f4a6336d74a15dd5914fbb83ab81ab9f4 docker.io/ollivier/clearwater-bono:latest] 303484988} {[docker.io/ollivier/clearwater-sprout@sha256:5a833832419bcf25ea1044768038c885ed4bad73225d5d07fc54eebc2a56662b docker.io/ollivier/clearwater-sprout:latest] 298458075} {[docker.io/ollivier/clearwater-homestead@sha256:6eebbdbc9e424dd87b3d149b9fa1c779ad5c402e2f7ef414ec585a43ebb782d6 docker.io/ollivier/clearwater-homestead:latest] 294998669} {[docker.io/ollivier/clearwater-ralf@sha256:becf37bf5c8d9f81189d9d727c3c3ab7e032b7de3a710f7bbb264d35d442a344 docker.io/ollivier/clearwater-ralf:latest] 287275238} {[docker.io/ollivier/clearwater-chronos@sha256:83359fb6320eefc0adbf56fcd4eb7a19be2c53dadaa4944a20510cc761536222 docker.io/ollivier/clearwater-chronos:latest] 285335126} {[k8s.gcr.io/etcd:3.3.10] 258352566} {[k8s.gcr.io/kube-apiserver:v1.15.12] 249278220} {[k8s.gcr.io/kube-controller-manager:v1.15.12] 200111388} {[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 142444388} {[docker.io/aquasec/kube-hunter@sha256:fee4656ab3b4db6aba14143a8a8e1aa77ac743e3574e7f9ca126a96887505ccc docker.io/aquasec/kube-hunter:latest] 127871601} {[docker.io/kindest/kindnetd:0.5.4] 113207016} {[k8s.gcr.io/kube-proxy:v1.15.12] 97386120} {[k8s.gcr.io/kube-scheduler:v1.15.12] 96590091} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 85425365} {[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[docker.io/rancher/local-path-provisioner:v0.0.12] 41994847} {[k8s.gcr.io/coredns:1.3.1] 40532446} {[docker.io/ollivier/clearwater-live-test@sha256:7d2da02dca6f486c0f48830ae9d064712a7429523a749953e9fb516ec77637c4 docker.io/ollivier/clearwater-live-test:latest] 39175389} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 36655159} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 16222606} {[docker.io/aquasec/kube-bench@sha256:2bd792eae0d13222bbf7a3641328b2a8cbe80f39c04575d06754e63da6e46cc7] 8042967} {[docker.io/aquasec/kube-bench@sha256:7a12e0a4544bc87e0e9ce182a2cb6e23218f2df61723f38775a0c9cd454f3579 docker.io/aquasec/kube-bench:latest] 8029635} {[docker.io/aquasec/kube-bench@sha256:3e0a46bdd1bc66379e3803bf5d632a840a7a29b3124ffdda8d202f1751edce24] 8029253} {[docker.io/aquasec/kube-bench@sha256:41c9f89fd7da8236f5ea798dd4ac1e6fbb802a9fb6512050d8cdb4d647ae0329] 8028387} {[docker.io/aquasec/kube-bench@sha256:c21b290f8708caa1754d54f9903762bdc7aa4609f2292b7d376f9cd0ebbc800e] 8028386} {[quay.io/coreos/etcd:v2.2.5] 7670543} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 7398578} {[docker.io/library/nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 docker.io/library/nginx:1.15-alpine] 6999654} {[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine] 6978806} {[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0] 4381769} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 4331310} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 3854313} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 2943605} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 2785431} {[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest] 2779755} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 2509546} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 2258365} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 1804628} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 1799936} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 1791163} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 1772917} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 1039914} {[docker.io/library/busybox@sha256:4f47c01fa91355af2865ac10fef5bf6ec9c7f42ad2321377c21e844427972977 docker.io/library/busybox:latest] 767890} {[docker.io/library/busybox@sha256:9ddee63a712cea977267342e8750ecbc60d3aab25f04ceacfa795e6fce341793] 767885} {[k8s.gcr.io/pause:3.1] 746479} {[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29] 732685}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Aug 11 18:34:21.573: INFO: Logging kubelet events for node iruya-worker2
Aug 11 18:34:21.579: INFO: Logging pods the kubelet thinks is on node iruya-worker2
Aug 11 18:34:21.590: INFO: kindnet-8kg9z started at 2020-07-19 21:16:09 +0000 UTC (0+1 container statuses recorded)
Aug 11 18:34:21.590: INFO: Container kindnet-cni ready: true, restart count 0
Aug 11 18:34:21.590: INFO: kube-proxy-9ktgx started at 2020-07-19 21:16:10 +0000 UTC (0+1 container statuses recorded)
Aug 11 18:34:21.590: INFO: Container kube-proxy ready: true, restart count 0
W0811 18:34:21.597051 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 11 18:34:21.703: INFO: Latency metrics for node iruya-worker2
Aug 11 18:34:21.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3175" for this suite.
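(The node and event dumps above are the framework's automatic AfterEach diagnostics. A hedged sketch of standard kubectl queries that would surface roughly the same information by hand, using the kubeconfig, namespace, and node names taken from this log; these only work while the namespace still exists:)

kubectl --kubeconfig=/root/.kube/config get events -n kubectl-3175 --sort-by=.lastTimestamp   # the 39 events collected above
kubectl --kubeconfig=/root/.kube/config get pods -n kubectl-3175 -o wide                      # the POD/NODE/PHASE table
kubectl --kubeconfig=/root/.kube/config describe node iruya-worker                            # per-node info akin to the Node dumps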
Aug 11 18:34:48.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 18:34:48.602: INFO: namespace kubectl-3175 deletion completed in 26.890175466s

• Failure [860.043 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697

    Unexpected error:
        <*errors.errorString | 0x40030da0c0>: {
            s: "Timeout while waiting for pods with labels \"app=guestbook,tier=frontend\" to be running",
        }
        Timeout while waiting for pods with labels "app=guestbook,tier=frontend" to be running
    occurred

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2151
------------------------------
S… (skipped-spec progress markers elided)
Aug 11 18:34:48.697: INFO: Running AfterSuite actions on all nodes
Aug 11 18:34:48.698: INFO: Running AfterSuite actions on node 1
Aug 11 18:34:48.698: INFO: Skipping dumping logs from cluster

Summarizing 1 Failure:

[Fail] [sig-cli] Kubectl client [k8s.io] Guestbook application [It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2151

Ran 1 of 4413 Specs in 862.178 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 4412 Skipped
--- FAIL: TestE2E (863.04s)
FAIL
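(Reading the failure: the spec timed out because none of the guestbook pods ever left Pending. Every sandbox creation hit "rpc error: code = DeadlineExceeded", and the later "failed to reserve container name ... is reserved for <id>" errors show the kubelet retrying while containerd still held the first, stuck creation. That pattern points at the container runtime on the nodes rather than at the manifests or the test. A hedged sketch of follow-up checks, assuming this is a kind cluster whose node containers are named after the nodes, as the log suggests, and that crictl is available inside the node image:)

# Inspect containerd state on an affected node (commands are assumptions, not from the log):
docker exec iruya-worker crictl pods --state NotReady                          # sandboxes that never came up
docker exec iruya-worker crictl ps -a                                          # all containers, including stuck creations
docker exec iruya-worker journalctl -u containerd -S "2020-08-11 18:20:00"    # runtime logs for the failure window

# Once the runtime is healthy, the single failed spec can be re-run with the
# standard Ginkgo focus flag (path to the e2e.test binary is an assumption):
./e2e.test --kubeconfig=/root/.kube/config --ginkgo.focus="Guestbook application"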