I0921 10:17:18.245532 10 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0921 10:17:18.255892 10 e2e.go:129] Starting e2e run "f28976c6-96d6-4bbe-8df6-f43507655ea7" on Ginkgo node 1
{"msg":"Test Suite starting","total":303,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1600683421 - Will randomize all specs
Will run 303 of 5232 specs

Sep 21 10:17:18.863: INFO: >>> kubeConfig: /root/.kube/config
Sep 21 10:17:18.910: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Sep 21 10:17:19.131: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep 21 10:17:19.313: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep 21 10:17:19.313: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Sep 21 10:17:19.313: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep 21 10:17:19.356: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Sep 21 10:17:19.356: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Sep 21 10:17:19.356: INFO: e2e test version: v1.19.2
Sep 21 10:17:19.361: INFO: kube-apiserver version: v1.19.0
Sep 21 10:17:19.363: INFO: >>> kubeConfig: /root/.kube/config
Sep 21 10:17:19.387: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:17:19.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
Sep 21 10:17:19.474: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Sep 21 10:17:19.481: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:17:25.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1867" for this suite.
• [SLOW TEST:6.387 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":303,"completed":1,"skipped":37,"failed":0}
[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:17:25.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Sep 21 10:17:25.894: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Sep 21 10:17:25.932: INFO: Waiting for terminating namespaces to be deleted...
Sep 21 10:17:25.940: INFO: Logging pods the apiserver thinks is on node kali-worker before test
Sep 21 10:17:25.958: INFO: pod-init-d718bbce-4a14-4944-b38b-6d08181fe0cb from init-container-1867 started at 2020-09-21 10:17:19 +0000 UTC (1 container statuses recorded)
Sep 21 10:17:25.959: INFO: Container run1 ready: false, restart count 0
Sep 21 10:17:25.959: INFO: kindnet-jk7qk from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded)
Sep 21 10:17:25.959: INFO: Container kindnet-cni ready: true, restart count 0
Sep 21 10:17:25.959: INFO: kube-proxy-kz8hk from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded)
Sep 21 10:17:25.959: INFO: Container kube-proxy ready: true, restart count 0
Sep 21 10:17:25.959: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test
Sep 21 10:17:25.966: INFO: kindnet-r64bh from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded)
Sep 21 10:17:25.966: INFO: Container kindnet-cni ready: true, restart count 0
Sep 21 10:17:25.967: INFO: kube-proxy-rnv9w from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded)
Sep 21 10:17:25.967: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-4ff06ae4-9969-4889-867f-27f452c8b39a 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-4ff06ae4-9969-4889-867f-27f452c8b39a off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-4ff06ae4-9969-4889-867f-27f452c8b39a
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:17:44.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9432" for this suite.
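The three pod-creation steps above exercise the scheduler's host-port conflict rule: pods may share a hostPort on the same node as long as the (hostIP, protocol, hostPort) triple differs, with hostIP 0.0.0.0 conflicting with every address. As a rough illustration only (this is a simplified model, not kube-scheduler source), the rule can be sketched as:

```python
# Hedged sketch of the host-port conflict check the test above relies on.
# Simplified model: real kube-scheduler logic lives in the NodePorts plugin.

def host_ports_conflict(a, b):
    """Each argument is a (host_ip, protocol, host_port) triple."""
    ip_a, proto_a, port_a = a
    ip_b, proto_b, port_b = b
    if port_a != port_b or proto_a != proto_b:
        return False
    # 0.0.0.0 binds all interfaces, so it conflicts with any hostIP.
    return ip_a == ip_b or ip_a == "0.0.0.0" or ip_b == "0.0.0.0"

# The three pods from the log: same hostPort 54321, but each differs
# in hostIP or protocol, so none of them conflict pairwise.
pod1 = ("127.0.0.1", "TCP", 54321)
pod2 = ("127.0.0.2", "TCP", 54321)
pod3 = ("127.0.0.2", "UDP", 54321)
assert not host_ports_conflict(pod1, pod2)
assert not host_ports_conflict(pod2, pod3)
assert host_ports_conflict(pod2, pod2)  # identical triple does conflict
```

This is why all three pods schedule onto the same node (kali-worker) without any port-conflict rejection.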
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:18.692 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":303,"completed":2,"skipped":37,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] server version should find the server version [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] server version
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:17:44.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename server-version
STEP: Waiting for a default service account to be provisioned in namespace
[It] should find the server version [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Request ServerVersion
STEP: Confirm major version
Sep 21 10:17:44.575: INFO: Major version: 1
STEP: Confirm minor version
Sep 21 10:17:44.575: INFO: cleanMinorVersion: 19
Sep 21 10:17:44.576: INFO: Minor version: 19
[AfterEach] [sig-api-machinery] server version
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:17:44.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-358" for this suite.
•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":303,"completed":3,"skipped":67,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should test the lifecycle of an Endpoint [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:17:44.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should test the lifecycle of an Endpoint [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating an Endpoint
STEP: waiting for available Endpoint
STEP: listing all Endpoints
STEP: updating the Endpoint
STEP: fetching the Endpoint
STEP: patching the Endpoint
STEP: fetching the Endpoint
STEP: deleting the Endpoint by Collection
STEP: waiting for Endpoint deletion
STEP: fetching the Endpoint
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:17:44.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-585" for this suite.
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":303,"completed":4,"skipped":84,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:17:44.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 21 10:17:44.912: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc8b91b9-ad6d-4616-9a80-452a8409e5fe" in namespace "downward-api-5000" to be "Succeeded or Failed"
Sep 21 10:17:44.950: INFO: Pod "downwardapi-volume-bc8b91b9-ad6d-4616-9a80-452a8409e5fe": Phase="Pending", Reason="", readiness=false. Elapsed: 37.023475ms
Sep 21 10:17:46.958: INFO: Pod "downwardapi-volume-bc8b91b9-ad6d-4616-9a80-452a8409e5fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04587281s
Sep 21 10:17:48.968: INFO: Pod "downwardapi-volume-bc8b91b9-ad6d-4616-9a80-452a8409e5fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055428493s
STEP: Saw pod success
Sep 21 10:17:48.968: INFO: Pod "downwardapi-volume-bc8b91b9-ad6d-4616-9a80-452a8409e5fe" satisfied condition "Succeeded or Failed"
Sep 21 10:17:48.974: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-bc8b91b9-ad6d-4616-9a80-452a8409e5fe container client-container:
STEP: delete the pod
Sep 21 10:17:49.053: INFO: Waiting for pod downwardapi-volume-bc8b91b9-ad6d-4616-9a80-452a8409e5fe to disappear
Sep 21 10:17:49.062: INFO: Pod downwardapi-volume-bc8b91b9-ad6d-4616-9a80-452a8409e5fe no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:17:49.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5000" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":5,"skipped":90,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:17:49.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on node default medium
Sep 21 10:17:49.186: INFO: Waiting up to 5m0s for pod "pod-6b977fc9-ab26-47fc-938b-3daf6ad8275c" in namespace "emptydir-4194" to be "Succeeded or Failed"
Sep 21 10:17:49.197: INFO: Pod "pod-6b977fc9-ab26-47fc-938b-3daf6ad8275c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.373952ms
Sep 21 10:17:51.303: INFO: Pod "pod-6b977fc9-ab26-47fc-938b-3daf6ad8275c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116795876s
Sep 21 10:17:53.310: INFO: Pod "pod-6b977fc9-ab26-47fc-938b-3daf6ad8275c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123820742s
Sep 21 10:17:55.318: INFO: Pod "pod-6b977fc9-ab26-47fc-938b-3daf6ad8275c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.131652891s
STEP: Saw pod success
Sep 21 10:17:55.318: INFO: Pod "pod-6b977fc9-ab26-47fc-938b-3daf6ad8275c" satisfied condition "Succeeded or Failed"
Sep 21 10:17:55.324: INFO: Trying to get logs from node kali-worker2 pod pod-6b977fc9-ab26-47fc-938b-3daf6ad8275c container test-container:
STEP: delete the pod
Sep 21 10:17:55.346: INFO: Waiting for pod pod-6b977fc9-ab26-47fc-938b-3daf6ad8275c to disappear
Sep 21 10:17:55.391: INFO: Pod pod-6b977fc9-ab26-47fc-938b-3daf6ad8275c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:17:55.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4194" for this suite.
• [SLOW TEST:6.325 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":6,"skipped":91,"failed":0}
SSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:17:55.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 21 10:17:55.484: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-06f68e01-e464-41d7-8816-0aec2ee342ed" in namespace "security-context-test-2755" to be "Succeeded or Failed"
Sep 21 10:17:55.536: INFO: Pod "busybox-privileged-false-06f68e01-e464-41d7-8816-0aec2ee342ed": Phase="Pending", Reason="", readiness=false. Elapsed: 52.068888ms
Sep 21 10:17:57.545: INFO: Pod "busybox-privileged-false-06f68e01-e464-41d7-8816-0aec2ee342ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06074384s
Sep 21 10:17:59.553: INFO: Pod "busybox-privileged-false-06f68e01-e464-41d7-8816-0aec2ee342ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0684066s
Sep 21 10:17:59.553: INFO: Pod "busybox-privileged-false-06f68e01-e464-41d7-8816-0aec2ee342ed" satisfied condition "Succeeded or Failed"
Sep 21 10:17:59.566: INFO: Got logs for pod "busybox-privileged-false-06f68e01-e464-41d7-8816-0aec2ee342ed": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:17:59.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2755" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":7,"skipped":95,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] LimitRange
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:17:59.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Sep 21 10:17:59.699: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Sep 21 10:17:59.733: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
Sep 21 10:17:59.737: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Sep 21 10:17:59.773: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
Sep 21 10:17:59.774: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Sep 21 10:17:59.853: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
Sep 21 10:17:59.854: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Sep 21 10:18:07.779: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:18:07.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-8140" for this suite.
• [SLOW TEST:8.311 seconds]
[sig-scheduling] LimitRange
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":303,"completed":8,"skipped":109,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:18:07.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 21 10:18:08.059: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Sep 21 10:18:08.089: INFO: Pod name sample-pod: Found 0 pods out of 1
Sep 21 10:18:13.113: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Sep 21 10:18:13.114: INFO: Creating deployment "test-rolling-update-deployment"
Sep 21 10:18:13.130: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the
adopted replica set "test-rolling-update-controller" has Sep 21 10:18:13.203: INFO: deployment "test-rolling-update-deployment" doesn't have the required revision set Sep 21 10:18:15.364: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Sep 21 10:18:15.403: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736280293, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736280293, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736280293, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736280293, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-c4cb8d6d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 21 10:18:17.508: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736280293, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736280293, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736280293, loc:(*time.Location)(0x5d1d160)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736280293, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-c4cb8d6d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 21 10:18:19.412: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 21 10:18:19.443: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-6484 /apis/apps/v1/namespaces/deployment-6484/deployments/test-rolling-update-deployment 108ffb40-446b-460a-8b6f-e83655f85bb0 2044673 1 2020-09-21 10:18:13 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-09-21 10:18:13 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-21 10:18:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x8e4cef8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-09-21 10:18:13 +0000 
UTC,LastTransitionTime:2020-09-21 10:18:13 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" has successfully progressed.,LastUpdateTime:2020-09-21 10:18:18 +0000 UTC,LastTransitionTime:2020-09-21 10:18:13 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Sep 21 10:18:19.454: INFO: New ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9 deployment-6484 /apis/apps/v1/namespaces/deployment-6484/replicasets/test-rolling-update-deployment-c4cb8d6d9 c97b8445-03a3-4475-94b5-5b3c9cadd0f9 2044660 1 2020-09-21 10:18:13 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 108ffb40-446b-460a-8b6f-e83655f85bb0 0x8e4d400 0x8e4d401}] [] [{kube-controller-manager Update apps/v1 2020-09-21 10:18:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"108ffb40-446b-460a-8b6f-e83655f85bb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: c4cb8d6d9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x8e4d478 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Sep 21 10:18:19.455: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Sep 21 10:18:19.456: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-6484 /apis/apps/v1/namespaces/deployment-6484/replicasets/test-rolling-update-controller e9ef3662-6cab-4bb1-8789-519ad6e9dc17 2044672 2 2020-09-21 10:18:08 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 108ffb40-446b-460a-8b6f-e83655f85bb0 0x8e4d2f7 0x8e4d2f8}] [] [{e2e.test Update apps/v1 2020-09-21 10:18:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-21 10:18:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"108ffb40-446b-460a-8b6f-e83655f85bb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x8e4d398 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 21 10:18:19.474: INFO: Pod "test-rolling-update-deployment-c4cb8d6d9-4zskx" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9-4zskx test-rolling-update-deployment-c4cb8d6d9- deployment-6484 /api/v1/namespaces/deployment-6484/pods/test-rolling-update-deployment-c4cb8d6d9-4zskx 2c13a133-ce56-4f7b-a57d-7b8527f36ef9 2044659 0 2020-09-21 10:18:13 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-c4cb8d6d9 c97b8445-03a3-4475-94b5-5b3c9cadd0f9 0x8e4d8e0 0x8e4d8e1}] [] [{kube-controller-manager Update v1 2020-09-21 10:18:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c97b8445-03a3-4475-94b5-5b3c9cadd0f9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:18:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.49\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4lb4m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4lb4m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources
:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4lb4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},
SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:18:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:18:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:18:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:18:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.49,StartTime:2020-09-21 10:18:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-21 10:18:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://6d73b919e4e5cd2daabff4fcef645b2cf53f7153c18ccb63c298240b023c2ed3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.49,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:18:19.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6484" for this suite. 
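[Annotation] The rolling-update behavior exercised by this test corresponds to a Deployment spec along the following lines — a sketch reconstructed from the object dump above (replicas 1, 25%/25% RollingUpdate strategy, agnhost container); field values not shown in the log are omitted:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
  labels:
    name: sample-pod
spec:
  replicas: 1
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%        # at most one extra pod above the desired count during rollout
      maxUnavailable: 25%  # rounded down, so the single replica stays available throughout
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
```

With one replica, maxSurge 25% rounds up to 1 and maxUnavailable 25% rounds down to 0, which is why the old ReplicaSet (test-rolling-update-controller) is only scaled to 0 after the new pod reports Ready.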
• [SLOW TEST:11.597 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":9,"skipped":122,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:18:19.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:18:19.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3460" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":303,"completed":10,"skipped":134,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:18:19.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-66a50cc8-a0de-4748-b16a-f9ea9e1165a8 STEP: Creating a pod to test consume secrets Sep 21 
10:18:19.737: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2b74707a-35ed-41b0-b447-2511daeb80cd" in namespace "projected-4990" to be "Succeeded or Failed" Sep 21 10:18:19.762: INFO: Pod "pod-projected-secrets-2b74707a-35ed-41b0-b447-2511daeb80cd": Phase="Pending", Reason="", readiness=false. Elapsed: 24.77338ms Sep 21 10:18:21.769: INFO: Pod "pod-projected-secrets-2b74707a-35ed-41b0-b447-2511daeb80cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031848663s Sep 21 10:18:23.777: INFO: Pod "pod-projected-secrets-2b74707a-35ed-41b0-b447-2511daeb80cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039369166s STEP: Saw pod success Sep 21 10:18:23.777: INFO: Pod "pod-projected-secrets-2b74707a-35ed-41b0-b447-2511daeb80cd" satisfied condition "Succeeded or Failed" Sep 21 10:18:23.782: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-2b74707a-35ed-41b0-b447-2511daeb80cd container secret-volume-test: STEP: delete the pod Sep 21 10:18:23.842: INFO: Waiting for pod pod-projected-secrets-2b74707a-35ed-41b0-b447-2511daeb80cd to disappear Sep 21 10:18:23.853: INFO: Pod pod-projected-secrets-2b74707a-35ed-41b0-b447-2511daeb80cd no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:18:23.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4990" for this suite. 
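[Annotation] The pod above consumes the same Secret through two projected volumes; a minimal manifest of that shape looks roughly like the following (names, image, and command are illustrative, not the exact ones the test generates):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test        # illustrative name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29   # illustrative; the test uses its own test image
    # Read the same key through both mounts, then exit so the pod reaches Succeeded
    command: ["sh", "-c", "cat /etc/projected-secret-volume-1/data-1 /etc/projected-secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/projected-secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/projected-secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test
```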
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":11,"skipped":141,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:18:23.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Sep 21 10:18:24.184: INFO: >>> kubeConfig: /root/.kube/config Sep 21 10:18:45.265: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:19:38.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1835" for this suite. 
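[Annotation] Each CRD the test registers is of roughly this shape (sketch; group and names are illustrative) — the openAPIV3Schema of every served version is what must show up in the aggregated OpenAPI document, and using two such CRDs with different `spec.group` values is what this case checks:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crds.groupa.example.com
spec:
  group: groupa.example.com      # the second CRD would use a different group, e.g. groupb.example.com
  scope: Namespaced
  names:
    plural: e2e-test-crds
    kind: E2eTestCrd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:           # published under definitions in /openapi/v2
        type: object
        properties:
          spec:
            type: object
```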
• [SLOW TEST:74.515 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":303,"completed":12,"skipped":182,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:19:38.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token Sep 21 10:19:39.049: INFO: created pod pod-service-account-defaultsa Sep 21 10:19:39.049: INFO: pod pod-service-account-defaultsa service account token volume mount: true Sep 21 10:19:39.141: INFO: created pod pod-service-account-mountsa Sep 21 10:19:39.141: INFO: pod 
pod-service-account-mountsa service account token volume mount: true Sep 21 10:19:39.164: INFO: created pod pod-service-account-nomountsa Sep 21 10:19:39.164: INFO: pod pod-service-account-nomountsa service account token volume mount: false Sep 21 10:19:39.375: INFO: created pod pod-service-account-defaultsa-mountspec Sep 21 10:19:39.375: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Sep 21 10:19:39.525: INFO: created pod pod-service-account-mountsa-mountspec Sep 21 10:19:39.525: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Sep 21 10:19:39.663: INFO: created pod pod-service-account-nomountsa-mountspec Sep 21 10:19:39.664: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Sep 21 10:19:39.699: INFO: created pod pod-service-account-defaultsa-nomountspec Sep 21 10:19:39.700: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Sep 21 10:19:39.712: INFO: created pod pod-service-account-mountsa-nomountspec Sep 21 10:19:39.712: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Sep 21 10:19:40.040: INFO: created pod pod-service-account-nomountsa-nomountspec Sep 21 10:19:40.040: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:19:40.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9134" for this suite. 
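[Annotation] Opting out of token automount, as the *nomountspec pods above do, can be expressed on the ServiceAccount, on the pod spec, or both; when both are set, the pod-level field wins. A sketch with illustrative names:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false    # default for pods using this ServiceAccount
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-nomountspec-example
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false  # pod-level setting overrides the ServiceAccount's
  containers:
  - name: main
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20
```

This is why the log reports "service account token volume mount: false" whenever either the pod spec says false, or the ServiceAccount says false and the pod spec is silent.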
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":303,"completed":13,"skipped":245,"failed":0} S ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:19:40.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8178.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8178.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8178.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8178.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 21 10:19:56.747: INFO: DNS probes using dns-test-a87a2fc0-59d3-42e1-b506-41459806e4be succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8178.svc.cluster.local CNAME > 
/results/wheezy_udp@dns-test-service-3.dns-8178.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8178.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8178.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 21 10:20:06.897: INFO: File wheezy_udp@dns-test-service-3.dns-8178.svc.cluster.local from pod dns-8178/dns-test-ac90e21a-64f5-4d52-b135-8d1b7fe964a2 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 21 10:20:06.902: INFO: File jessie_udp@dns-test-service-3.dns-8178.svc.cluster.local from pod dns-8178/dns-test-ac90e21a-64f5-4d52-b135-8d1b7fe964a2 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 21 10:20:06.902: INFO: Lookups using dns-8178/dns-test-ac90e21a-64f5-4d52-b135-8d1b7fe964a2 failed for: [wheezy_udp@dns-test-service-3.dns-8178.svc.cluster.local jessie_udp@dns-test-service-3.dns-8178.svc.cluster.local] Sep 21 10:20:11.909: INFO: File wheezy_udp@dns-test-service-3.dns-8178.svc.cluster.local from pod dns-8178/dns-test-ac90e21a-64f5-4d52-b135-8d1b7fe964a2 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 21 10:20:11.915: INFO: File jessie_udp@dns-test-service-3.dns-8178.svc.cluster.local from pod dns-8178/dns-test-ac90e21a-64f5-4d52-b135-8d1b7fe964a2 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 21 10:20:11.916: INFO: Lookups using dns-8178/dns-test-ac90e21a-64f5-4d52-b135-8d1b7fe964a2 failed for: [wheezy_udp@dns-test-service-3.dns-8178.svc.cluster.local jessie_udp@dns-test-service-3.dns-8178.svc.cluster.local] Sep 21 10:20:16.911: INFO: File wheezy_udp@dns-test-service-3.dns-8178.svc.cluster.local from pod dns-8178/dns-test-ac90e21a-64f5-4d52-b135-8d1b7fe964a2 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Sep 21 10:20:16.917: INFO: File jessie_udp@dns-test-service-3.dns-8178.svc.cluster.local from pod dns-8178/dns-test-ac90e21a-64f5-4d52-b135-8d1b7fe964a2 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 21 10:20:16.917: INFO: Lookups using dns-8178/dns-test-ac90e21a-64f5-4d52-b135-8d1b7fe964a2 failed for: [wheezy_udp@dns-test-service-3.dns-8178.svc.cluster.local jessie_udp@dns-test-service-3.dns-8178.svc.cluster.local] Sep 21 10:20:21.910: INFO: File wheezy_udp@dns-test-service-3.dns-8178.svc.cluster.local from pod dns-8178/dns-test-ac90e21a-64f5-4d52-b135-8d1b7fe964a2 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 21 10:20:21.915: INFO: File jessie_udp@dns-test-service-3.dns-8178.svc.cluster.local from pod dns-8178/dns-test-ac90e21a-64f5-4d52-b135-8d1b7fe964a2 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 21 10:20:21.915: INFO: Lookups using dns-8178/dns-test-ac90e21a-64f5-4d52-b135-8d1b7fe964a2 failed for: [wheezy_udp@dns-test-service-3.dns-8178.svc.cluster.local jessie_udp@dns-test-service-3.dns-8178.svc.cluster.local] Sep 21 10:20:26.914: INFO: DNS probes using dns-test-ac90e21a-64f5-4d52-b135-8d1b7fe964a2 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8178.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8178.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8178.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8178.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 21 10:20:35.701: INFO: DNS probes using dns-test-e0276473-1446-4e1a-a861-74b6c4de9e4c succeeded STEP: deleting the pod STEP: deleting the test externalName 
service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:20:35.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8178" for this suite. • [SLOW TEST:56.097 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":303,"completed":14,"skipped":246,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:20:36.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 10:20:36.730: INFO: >>> kubeConfig: 
/root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Sep 21 10:20:57.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5959 create -f -' Sep 21 10:21:03.401: INFO: stderr: "" Sep 21 10:21:03.401: INFO: stdout: "e2e-test-crd-publish-openapi-2535-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Sep 21 10:21:03.402: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5959 delete e2e-test-crd-publish-openapi-2535-crds test-cr' Sep 21 10:21:04.605: INFO: stderr: "" Sep 21 10:21:04.605: INFO: stdout: "e2e-test-crd-publish-openapi-2535-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Sep 21 10:21:04.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5959 apply -f -' Sep 21 10:21:07.290: INFO: stderr: "" Sep 21 10:21:07.290: INFO: stdout: "e2e-test-crd-publish-openapi-2535-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Sep 21 10:21:07.291: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5959 delete e2e-test-crd-publish-openapi-2535-crds test-cr' Sep 21 10:21:08.519: INFO: stderr: "" Sep 21 10:21:08.519: INFO: stdout: "e2e-test-crd-publish-openapi-2535-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Sep 21 10:21:08.520: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2535-crds' Sep 21 10:21:10.858: INFO: stderr: "" Sep 21 10:21:10.858: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2535-crd\nVERSION: 
crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:21:31.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5959" for this suite. • [SLOW TEST:55.196 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":303,"completed":15,"skipped":257,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:21:31.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) 
[LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Sep 21 10:21:31.684: INFO: Waiting up to 5m0s for pod "pod-4cdedd58-aa2c-442b-8cd9-f8815a29239c" in namespace "emptydir-193" to be "Succeeded or Failed" Sep 21 10:21:31.690: INFO: Pod "pod-4cdedd58-aa2c-442b-8cd9-f8815a29239c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.392234ms Sep 21 10:21:33.698: INFO: Pod "pod-4cdedd58-aa2c-442b-8cd9-f8815a29239c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013553924s Sep 21 10:21:35.704: INFO: Pod "pod-4cdedd58-aa2c-442b-8cd9-f8815a29239c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019481078s STEP: Saw pod success Sep 21 10:21:35.704: INFO: Pod "pod-4cdedd58-aa2c-442b-8cd9-f8815a29239c" satisfied condition "Succeeded or Failed" Sep 21 10:21:35.712: INFO: Trying to get logs from node kali-worker pod pod-4cdedd58-aa2c-442b-8cd9-f8815a29239c container test-container: STEP: delete the pod Sep 21 10:21:35.764: INFO: Waiting for pod pod-4cdedd58-aa2c-442b-8cd9-f8815a29239c to disappear Sep 21 10:21:35.776: INFO: Pod pod-4cdedd58-aa2c-442b-8cd9-f8815a29239c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:21:35.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-193" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":16,"skipped":295,"failed":0} SS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:21:35.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-3930, will wait for the garbage collector to delete the pods Sep 21 10:21:40.239: INFO: Deleting Job.batch foo took: 9.953327ms Sep 21 10:21:40.742: INFO: Terminating Job.batch foo pods took: 503.002317ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:22:23.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3930" for this suite. 
• [SLOW TEST:47.590 seconds] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":303,"completed":17,"skipped":297,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:22:23.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-cf16ae80-2b39-4894-9206-fedd23919610 STEP: Creating configMap with name cm-test-opt-upd-604b0955-b156-44d3-a388-86b654cee7eb STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-cf16ae80-2b39-4894-9206-fedd23919610 STEP: Updating configmap cm-test-opt-upd-604b0955-b156-44d3-a388-86b654cee7eb STEP: Creating configMap with name cm-test-opt-create-3ee8ae35-fab0-4d43-8f71-00f0000c07e2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:23:44.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3823" for this suite. • [SLOW TEST:80.908 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":18,"skipped":300,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:23:44.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 10:23:44.400: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:23:45.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-192" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":303,"completed":19,"skipped":312,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:23:45.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Sep 21 10:23:45.253: INFO: Waiting up to 5m0s for pod 
"pod-23476c4d-31f3-4079-a5d1-9fbf224accee" in namespace "emptydir-5399" to be "Succeeded or Failed" Sep 21 10:23:45.282: INFO: Pod "pod-23476c4d-31f3-4079-a5d1-9fbf224accee": Phase="Pending", Reason="", readiness=false. Elapsed: 28.883135ms Sep 21 10:23:47.290: INFO: Pod "pod-23476c4d-31f3-4079-a5d1-9fbf224accee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036315712s Sep 21 10:23:49.579: INFO: Pod "pod-23476c4d-31f3-4079-a5d1-9fbf224accee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.325363883s STEP: Saw pod success Sep 21 10:23:49.579: INFO: Pod "pod-23476c4d-31f3-4079-a5d1-9fbf224accee" satisfied condition "Succeeded or Failed" Sep 21 10:23:49.740: INFO: Trying to get logs from node kali-worker2 pod pod-23476c4d-31f3-4079-a5d1-9fbf224accee container test-container: STEP: delete the pod Sep 21 10:23:49.837: INFO: Waiting for pod pod-23476c4d-31f3-4079-a5d1-9fbf224accee to disappear Sep 21 10:23:49.864: INFO: Pod pod-23476c4d-31f3-4079-a5d1-9fbf224accee no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:23:49.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5399" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":20,"skipped":330,"failed":0} SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:23:49.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Sep 21 10:23:58.501: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 21 10:23:58.639: INFO: Pod pod-with-prestop-http-hook still exists Sep 21 10:24:00.640: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 21 10:24:00.656: INFO: Pod pod-with-prestop-http-hook still exists Sep 21 10:24:02.640: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 21 10:24:02.647: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:24:02.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6109" for this suite. 
• [SLOW TEST:12.754 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":303,"completed":21,"skipped":334,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:24:02.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-a8e238af-9c68-410b-95ba-9acfd88b1f9a STEP: Creating a pod to test consume configMaps Sep 21 10:24:02.784: 
INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0ea74404-4e2e-46da-83f0-2f816b6810ec" in namespace "projected-1867" to be "Succeeded or Failed" Sep 21 10:24:02.795: INFO: Pod "pod-projected-configmaps-0ea74404-4e2e-46da-83f0-2f816b6810ec": Phase="Pending", Reason="", readiness=false. Elapsed: 10.394314ms Sep 21 10:24:04.842: INFO: Pod "pod-projected-configmaps-0ea74404-4e2e-46da-83f0-2f816b6810ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057319004s Sep 21 10:24:06.850: INFO: Pod "pod-projected-configmaps-0ea74404-4e2e-46da-83f0-2f816b6810ec": Phase="Running", Reason="", readiness=true. Elapsed: 4.065041409s Sep 21 10:24:08.880: INFO: Pod "pod-projected-configmaps-0ea74404-4e2e-46da-83f0-2f816b6810ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.095573302s STEP: Saw pod success Sep 21 10:24:08.881: INFO: Pod "pod-projected-configmaps-0ea74404-4e2e-46da-83f0-2f816b6810ec" satisfied condition "Succeeded or Failed" Sep 21 10:24:08.950: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-0ea74404-4e2e-46da-83f0-2f816b6810ec container projected-configmap-volume-test: STEP: delete the pod Sep 21 10:24:08.994: INFO: Waiting for pod pod-projected-configmaps-0ea74404-4e2e-46da-83f0-2f816b6810ec to disappear Sep 21 10:24:09.003: INFO: Pod pod-projected-configmaps-0ea74404-4e2e-46da-83f0-2f816b6810ec no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:24:09.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1867" for this suite. 
• [SLOW TEST:6.342 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":22,"skipped":341,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:24:09.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:24:25.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8650" for this suite. • [SLOW TEST:16.291 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":303,"completed":23,"skipped":346,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:24:25.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-7726 Sep 21 10:24:29.430: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7726 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Sep 21 10:24:30.906: INFO: stderr: "I0921 10:24:30.809078 135 log.go:181] (0x277a930) (0x277af50) Create stream\nI0921 10:24:30.811170 135 log.go:181] (0x277a930) (0x277af50) Stream added, broadcasting: 1\nI0921 10:24:30.821604 135 log.go:181] (0x277a930) Reply frame received for 1\nI0921 10:24:30.822344 135 log.go:181] (0x277a930) (0x277b2d0) Create stream\nI0921 10:24:30.822434 135 log.go:181] (0x277a930) (0x277b2d0) Stream added, broadcasting: 3\nI0921 
10:24:30.824102 135 log.go:181] (0x277a930) Reply frame received for 3\nI0921 10:24:30.824360 135 log.go:181] (0x277a930) (0x247c930) Create stream\nI0921 10:24:30.824424 135 log.go:181] (0x277a930) (0x247c930) Stream added, broadcasting: 5\nI0921 10:24:30.825730 135 log.go:181] (0x277a930) Reply frame received for 5\nI0921 10:24:30.885216 135 log.go:181] (0x277a930) Data frame received for 5\nI0921 10:24:30.885444 135 log.go:181] (0x247c930) (5) Data frame handling\nI0921 10:24:30.885749 135 log.go:181] (0x247c930) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0921 10:24:30.887132 135 log.go:181] (0x277a930) Data frame received for 3\nI0921 10:24:30.887214 135 log.go:181] (0x277b2d0) (3) Data frame handling\nI0921 10:24:30.887311 135 log.go:181] (0x277b2d0) (3) Data frame sent\nI0921 10:24:30.887735 135 log.go:181] (0x277a930) Data frame received for 5\nI0921 10:24:30.888003 135 log.go:181] (0x247c930) (5) Data frame handling\nI0921 10:24:30.888391 135 log.go:181] (0x277a930) Data frame received for 3\nI0921 10:24:30.888509 135 log.go:181] (0x277b2d0) (3) Data frame handling\nI0921 10:24:30.890016 135 log.go:181] (0x277a930) Data frame received for 1\nI0921 10:24:30.890137 135 log.go:181] (0x277af50) (1) Data frame handling\nI0921 10:24:30.890237 135 log.go:181] (0x277af50) (1) Data frame sent\nI0921 10:24:30.891631 135 log.go:181] (0x277a930) (0x277af50) Stream removed, broadcasting: 1\nI0921 10:24:30.893030 135 log.go:181] (0x277a930) Go away received\nI0921 10:24:30.896097 135 log.go:181] (0x277a930) (0x277af50) Stream removed, broadcasting: 1\nI0921 10:24:30.896422 135 log.go:181] (0x277a930) (0x277b2d0) Stream removed, broadcasting: 3\nI0921 10:24:30.896627 135 log.go:181] (0x277a930) (0x247c930) Stream removed, broadcasting: 5\n" Sep 21 10:24:30.906: INFO: stdout: "iptables" Sep 21 10:24:30.907: INFO: proxyMode: iptables Sep 21 10:24:30.914: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 21 
10:24:30.939: INFO: Pod kube-proxy-mode-detector still exists Sep 21 10:24:32.940: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 21 10:24:32.948: INFO: Pod kube-proxy-mode-detector still exists Sep 21 10:24:34.940: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 21 10:24:34.948: INFO: Pod kube-proxy-mode-detector still exists Sep 21 10:24:36.940: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 21 10:24:36.948: INFO: Pod kube-proxy-mode-detector still exists Sep 21 10:24:38.940: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 21 10:24:38.946: INFO: Pod kube-proxy-mode-detector still exists Sep 21 10:24:40.940: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 21 10:24:40.967: INFO: Pod kube-proxy-mode-detector still exists Sep 21 10:24:42.940: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 21 10:24:42.947: INFO: Pod kube-proxy-mode-detector still exists Sep 21 10:24:44.940: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 21 10:24:45.002: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-7726 STEP: creating replication controller affinity-nodeport-timeout in namespace services-7726 I0921 10:24:45.139828 10 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-7726, replica count: 3 I0921 10:24:48.193259 10 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0921 10:24:51.195860 10 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 21 10:24:51.219: INFO: Creating new exec pod Sep 21 10:24:56.285: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec 
--namespace=services-7726 execpod-affinity48njc -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Sep 21 10:24:57.765: INFO: stderr: "I0921 10:24:57.671259 155 log.go:181] (0x302c1c0) (0x302c230) Create stream\nI0921 10:24:57.674353 155 log.go:181] (0x302c1c0) (0x302c230) Stream added, broadcasting: 1\nI0921 10:24:57.691632 155 log.go:181] (0x302c1c0) Reply frame received for 1\nI0921 10:24:57.692066 155 log.go:181] (0x302c1c0) (0x24f8310) Create stream\nI0921 10:24:57.692130 155 log.go:181] (0x302c1c0) (0x24f8310) Stream added, broadcasting: 3\nI0921 10:24:57.693603 155 log.go:181] (0x302c1c0) Reply frame received for 3\nI0921 10:24:57.693854 155 log.go:181] (0x302c1c0) (0x2d1c1c0) Create stream\nI0921 10:24:57.693919 155 log.go:181] (0x302c1c0) (0x2d1c1c0) Stream added, broadcasting: 5\nI0921 10:24:57.695033 155 log.go:181] (0x302c1c0) Reply frame received for 5\nI0921 10:24:57.744555 155 log.go:181] (0x302c1c0) Data frame received for 3\nI0921 10:24:57.744931 155 log.go:181] (0x302c1c0) Data frame received for 5\nI0921 10:24:57.745275 155 log.go:181] (0x2d1c1c0) (5) Data frame handling\nI0921 10:24:57.745821 155 log.go:181] (0x24f8310) (3) Data frame handling\nI0921 10:24:57.746013 155 log.go:181] (0x302c1c0) Data frame received for 1\nI0921 10:24:57.746156 155 log.go:181] (0x302c230) (1) Data frame handling\nI0921 10:24:57.746452 155 log.go:181] (0x2d1c1c0) (5) Data frame sent\nI0921 10:24:57.746860 155 log.go:181] (0x302c230) (1) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI0921 10:24:57.748803 155 log.go:181] (0x302c1c0) Data frame received for 5\nI0921 10:24:57.748916 155 log.go:181] (0x2d1c1c0) (5) Data frame handling\nI0921 10:24:57.749042 155 log.go:181] (0x2d1c1c0) (5) Data frame sent\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0921 10:24:57.749133 155 log.go:181] (0x302c1c0) Data frame received for 5\nI0921 10:24:57.749390 155 log.go:181] (0x2d1c1c0) (5) Data frame handling\nI0921 
10:24:57.750336 155 log.go:181] (0x302c1c0) (0x302c230) Stream removed, broadcasting: 1\nI0921 10:24:57.752412 155 log.go:181] (0x302c1c0) Go away received\nI0921 10:24:57.755385 155 log.go:181] (0x302c1c0) (0x302c230) Stream removed, broadcasting: 1\nI0921 10:24:57.755699 155 log.go:181] (0x302c1c0) (0x24f8310) Stream removed, broadcasting: 3\nI0921 10:24:57.755892 155 log.go:181] (0x302c1c0) (0x2d1c1c0) Stream removed, broadcasting: 5\n" Sep 21 10:24:57.767: INFO: stdout: "" Sep 21 10:24:57.773: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7726 execpod-affinity48njc -- /bin/sh -x -c nc -zv -t -w 2 10.105.226.174 80' Sep 21 10:24:59.323: INFO: stderr: "I0921 10:24:59.215751 175 log.go:181] (0x2ab6000) (0x2ab6070) Create stream\nI0921 10:24:59.220565 175 log.go:181] (0x2ab6000) (0x2ab6070) Stream added, broadcasting: 1\nI0921 10:24:59.231058 175 log.go:181] (0x2ab6000) Reply frame received for 1\nI0921 10:24:59.232308 175 log.go:181] (0x2ab6000) (0x296e070) Create stream\nI0921 10:24:59.232442 175 log.go:181] (0x2ab6000) (0x296e070) Stream added, broadcasting: 3\nI0921 10:24:59.234434 175 log.go:181] (0x2ab6000) Reply frame received for 3\nI0921 10:24:59.234905 175 log.go:181] (0x2ab6000) (0x2bb8070) Create stream\nI0921 10:24:59.235012 175 log.go:181] (0x2ab6000) (0x2bb8070) Stream added, broadcasting: 5\nI0921 10:24:59.236855 175 log.go:181] (0x2ab6000) Reply frame received for 5\nI0921 10:24:59.306502 175 log.go:181] (0x2ab6000) Data frame received for 3\nI0921 10:24:59.306936 175 log.go:181] (0x296e070) (3) Data frame handling\nI0921 10:24:59.308042 175 log.go:181] (0x2ab6000) Data frame received for 5\nI0921 10:24:59.308126 175 log.go:181] (0x2bb8070) (5) Data frame handling\nI0921 10:24:59.308621 175 log.go:181] (0x2ab6000) Data frame received for 1\nI0921 10:24:59.308708 175 log.go:181] (0x2ab6070) (1) Data frame handling\nI0921 10:24:59.309184 175 log.go:181] (0x2ab6070) 
(1) Data frame sent\nI0921 10:24:59.309288 175 log.go:181] (0x2bb8070) (5) Data frame sent\nI0921 10:24:59.309578 175 log.go:181] (0x2ab6000) Data frame received for 5\nI0921 10:24:59.309690 175 log.go:181] (0x2bb8070) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.226.174 80\nConnection to 10.105.226.174 80 port [tcp/http] succeeded!\nI0921 10:24:59.311054 175 log.go:181] (0x2ab6000) (0x2ab6070) Stream removed, broadcasting: 1\nI0921 10:24:59.313617 175 log.go:181] (0x2ab6000) Go away received\nI0921 10:24:59.315952 175 log.go:181] (0x2ab6000) (0x2ab6070) Stream removed, broadcasting: 1\nI0921 10:24:59.316232 175 log.go:181] (0x2ab6000) (0x296e070) Stream removed, broadcasting: 3\nI0921 10:24:59.316410 175 log.go:181] (0x2ab6000) (0x2bb8070) Stream removed, broadcasting: 5\n" Sep 21 10:24:59.324: INFO: stdout: "" Sep 21 10:24:59.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7726 execpod-affinity48njc -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 30348' Sep 21 10:25:00.845: INFO: stderr: "I0921 10:25:00.740451 195 log.go:181] (0x29b4000) (0x29b4070) Create stream\nI0921 10:25:00.745156 195 log.go:181] (0x29b4000) (0x29b4070) Stream added, broadcasting: 1\nI0921 10:25:00.756632 195 log.go:181] (0x29b4000) Reply frame received for 1\nI0921 10:25:00.757148 195 log.go:181] (0x29b4000) (0x29b4230) Create stream\nI0921 10:25:00.757232 195 log.go:181] (0x29b4000) (0x29b4230) Stream added, broadcasting: 3\nI0921 10:25:00.758664 195 log.go:181] (0x29b4000) Reply frame received for 3\nI0921 10:25:00.759097 195 log.go:181] (0x29b4000) (0x2ab0070) Create stream\nI0921 10:25:00.759157 195 log.go:181] (0x29b4000) (0x2ab0070) Stream added, broadcasting: 5\nI0921 10:25:00.760490 195 log.go:181] (0x29b4000) Reply frame received for 5\nI0921 10:25:00.826921 195 log.go:181] (0x29b4000) Data frame received for 5\nI0921 10:25:00.827248 195 log.go:181] (0x2ab0070) (5) Data frame handling\nI0921 
10:25:00.827567 195 log.go:181] (0x29b4000) Data frame received for 3\nI0921 10:25:00.827812 195 log.go:181] (0x29b4230) (3) Data frame handling\nI0921 10:25:00.828377 195 log.go:181] (0x29b4000) Data frame received for 1\nI0921 10:25:00.828508 195 log.go:181] (0x29b4070) (1) Data frame handling\nI0921 10:25:00.828746 195 log.go:181] (0x2ab0070) (5) Data frame sent\nI0921 10:25:00.828888 195 log.go:181] (0x29b4070) (1) Data frame sent\nI0921 10:25:00.829782 195 log.go:181] (0x29b4000) Data frame received for 5\nI0921 10:25:00.829888 195 log.go:181] (0x2ab0070) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 30348\nConnection to 172.18.0.11 30348 port [tcp/30348] succeeded!\nI0921 10:25:00.832373 195 log.go:181] (0x29b4000) (0x29b4070) Stream removed, broadcasting: 1\nI0921 10:25:00.833698 195 log.go:181] (0x29b4000) Go away received\nI0921 10:25:00.836631 195 log.go:181] (0x29b4000) (0x29b4070) Stream removed, broadcasting: 1\nI0921 10:25:00.836798 195 log.go:181] (0x29b4000) (0x29b4230) Stream removed, broadcasting: 3\nI0921 10:25:00.836954 195 log.go:181] (0x29b4000) (0x2ab0070) Stream removed, broadcasting: 5\n" Sep 21 10:25:00.846: INFO: stdout: "" Sep 21 10:25:00.846: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7726 execpod-affinity48njc -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 30348' Sep 21 10:25:02.345: INFO: stderr: "I0921 10:25:02.210576 215 log.go:181] (0x2f98000) (0x2f98070) Create stream\nI0921 10:25:02.214301 215 log.go:181] (0x2f98000) (0x2f98070) Stream added, broadcasting: 1\nI0921 10:25:02.226881 215 log.go:181] (0x2f98000) Reply frame received for 1\nI0921 10:25:02.227608 215 log.go:181] (0x2f98000) (0x247a7e0) Create stream\nI0921 10:25:02.227746 215 log.go:181] (0x2f98000) (0x247a7e0) Stream added, broadcasting: 3\nI0921 10:25:02.229680 215 log.go:181] (0x2f98000) Reply frame received for 3\nI0921 10:25:02.230051 215 log.go:181] (0x2f98000) 
(0x255d030) Create stream\nI0921 10:25:02.230137 215 log.go:181] (0x2f98000) (0x255d030) Stream added, broadcasting: 5\nI0921 10:25:02.231609 215 log.go:181] (0x2f98000) Reply frame received for 5\nI0921 10:25:02.326412 215 log.go:181] (0x2f98000) Data frame received for 3\nI0921 10:25:02.326568 215 log.go:181] (0x2f98000) Data frame received for 5\nI0921 10:25:02.326746 215 log.go:181] (0x255d030) (5) Data frame handling\nI0921 10:25:02.327048 215 log.go:181] (0x247a7e0) (3) Data frame handling\nI0921 10:25:02.327815 215 log.go:181] (0x2f98000) Data frame received for 1\nI0921 10:25:02.327954 215 log.go:181] (0x2f98070) (1) Data frame handling\nI0921 10:25:02.328337 215 log.go:181] (0x255d030) (5) Data frame sent\nI0921 10:25:02.328756 215 log.go:181] (0x2f98070) (1) Data frame sent\nI0921 10:25:02.328979 215 log.go:181] (0x2f98000) Data frame received for 5\nI0921 10:25:02.329075 215 log.go:181] (0x255d030) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 30348\nConnection to 172.18.0.12 30348 port [tcp/30348] succeeded!\nI0921 10:25:02.331017 215 log.go:181] (0x2f98000) (0x2f98070) Stream removed, broadcasting: 1\nI0921 10:25:02.332741 215 log.go:181] (0x2f98000) Go away received\nI0921 10:25:02.337555 215 log.go:181] (0x2f98000) (0x2f98070) Stream removed, broadcasting: 1\nI0921 10:25:02.337740 215 log.go:181] (0x2f98000) (0x247a7e0) Stream removed, broadcasting: 3\nI0921 10:25:02.337874 215 log.go:181] (0x2f98000) (0x255d030) Stream removed, broadcasting: 5\n" Sep 21 10:25:02.346: INFO: stdout: "" Sep 21 10:25:02.347: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7726 execpod-affinity48njc -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:30348/ ; done' Sep 21 10:25:03.961: INFO: stderr: "I0921 10:25:03.730626 236 log.go:181] (0x247e000) (0x247e070) Create stream\nI0921 10:25:03.732821 236 log.go:181] (0x247e000) 
(0x247e070) Stream added, broadcasting: 1\nI0921 10:25:03.759675 236 log.go:181] (0x247e000) Reply frame received for 1\nI0921 10:25:03.760317 236 log.go:181] (0x247e000) (0x2db80e0) Create stream\nI0921 10:25:03.760398 236 log.go:181] (0x247e000) (0x2db80e0) Stream added, broadcasting: 3\nI0921 10:25:03.762194 236 log.go:181] (0x247e000) Reply frame received for 3\nI0921 10:25:03.762589 236 log.go:181] (0x247e000) (0x27d84d0) Create stream\nI0921 10:25:03.762704 236 log.go:181] (0x247e000) (0x27d84d0) Stream added, broadcasting: 5\nI0921 10:25:03.764233 236 log.go:181] (0x247e000) Reply frame received for 5\nI0921 10:25:03.849322 236 log.go:181] (0x247e000) Data frame received for 5\nI0921 10:25:03.849551 236 log.go:181] (0x27d84d0) (5) Data frame handling\nI0921 10:25:03.849724 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.849927 236 log.go:181] (0x27d84d0) (5) Data frame sent\nI0921 10:25:03.850164 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.850396 236 log.go:181] (0x2db80e0) (3) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0921 10:25:03.855022 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.855125 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.855263 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.856323 236 log.go:181] (0x247e000) Data frame received for 5\nI0921 10:25:03.856438 236 log.go:181] (0x27d84d0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0921 10:25:03.856554 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.856732 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.856918 236 log.go:181] (0x27d84d0) (5) Data frame sent\nI0921 10:25:03.857007 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.862843 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.863005 236 log.go:181] 
(0x2db80e0) (3) Data frame handling\nI0921 10:25:03.863457 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.863573 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.863697 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.863809 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.863913 236 log.go:181] (0x247e000) Data frame received for 5\nI0921 10:25:03.864004 236 log.go:181] (0x27d84d0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0921 10:25:03.864117 236 log.go:181] (0x27d84d0) (5) Data frame sent\nI0921 10:25:03.870585 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.870723 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.870842 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.871327 236 log.go:181] (0x247e000) Data frame received for 5\nI0921 10:25:03.871415 236 log.go:181] (0x27d84d0) (5) Data frame handling\nI0921 10:25:03.871491 236 log.go:181] (0x27d84d0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0921 10:25:03.871607 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.871673 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.871769 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.875026 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.875130 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.875259 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.875673 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.875774 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.875881 236 log.go:181] (0x247e000) Data frame received for 5\nI0921 10:25:03.876066 236 log.go:181] (0x27d84d0) (5) Data frame handling\nI0921 10:25:03.876247 236 log.go:181] (0x2db80e0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.18.0.11:30348/\nI0921 10:25:03.876350 236 log.go:181] (0x27d84d0) (5) Data frame sent\nI0921 10:25:03.881723 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.881856 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.882003 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.882487 236 log.go:181] (0x247e000) Data frame received for 5\nI0921 10:25:03.882642 236 log.go:181] (0x27d84d0) (5) Data frame handling\nI0921 10:25:03.882759 236 log.go:181] (0x27d84d0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0921 10:25:03.882868 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.882965 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.883083 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.886290 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.886403 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.886534 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.886823 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.886950 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.887085 236 log.go:181] (0x247e000) Data frame received for 5\nI0921 10:25:03.887261 236 log.go:181] (0x27d84d0) (5) Data frame handling\nI0921 10:25:03.887443 236 log.go:181] (0x27d84d0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0921 10:25:03.887582 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.891647 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.891783 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.891927 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.893194 236 log.go:181] (0x247e000) Data frame received for 5\nI0921 10:25:03.893388 236 log.go:181] (0x27d84d0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.18.0.11:30348/\nI0921 10:25:03.893538 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.893733 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.893862 236 log.go:181] (0x27d84d0) (5) Data frame sent\nI0921 10:25:03.894069 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.897218 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.897292 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.897368 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.897881 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.897986 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.898119 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.898214 236 log.go:181] (0x247e000) Data frame received for 5\nI0921 10:25:03.898274 236 log.go:181] (0x27d84d0) (5) Data frame handling\nI0921 10:25:03.898361 236 log.go:181] (0x27d84d0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0921 10:25:03.902102 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.902213 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.902336 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.902572 236 log.go:181] (0x247e000) Data frame received for 5\nI0921 10:25:03.902672 236 log.go:181] (0x27d84d0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0921 10:25:03.902741 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.902847 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.902954 236 log.go:181] (0x27d84d0) (5) Data frame sent\nI0921 10:25:03.903160 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.909013 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.909147 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.909295 236 log.go:181] (0x2db80e0) 
(3) Data frame sent\nI0921 10:25:03.909600 236 log.go:181] (0x247e000) Data frame received for 5\nI0921 10:25:03.909742 236 log.go:181] (0x27d84d0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0921 10:25:03.909860 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.909973 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.910057 236 log.go:181] (0x27d84d0) (5) Data frame sent\nI0921 10:25:03.910155 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.914172 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.914268 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.914392 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.914812 236 log.go:181] (0x247e000) Data frame received for 5\nI0921 10:25:03.914895 236 log.go:181] (0x27d84d0) (5) Data frame handling\nI0921 10:25:03.914969 236 log.go:181] (0x27d84d0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0921 10:25:03.915060 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.915183 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.915320 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.919038 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.919124 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.919215 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.920238 236 log.go:181] (0x247e000) Data frame received for 5\nI0921 10:25:03.920407 236 log.go:181] (0x27d84d0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0921 10:25:03.920532 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.920893 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.920973 236 log.go:181] (0x27d84d0) (5) Data frame sent\nI0921 10:25:03.921066 236 log.go:181] (0x2db80e0) (3) Data 
frame sent\nI0921 10:25:03.926483 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.926566 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.926659 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.927241 236 log.go:181] (0x247e000) Data frame received for 5\nI0921 10:25:03.927331 236 log.go:181] (0x27d84d0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0921 10:25:03.927419 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.927529 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.927624 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.927714 236 log.go:181] (0x27d84d0) (5) Data frame sent\nI0921 10:25:03.932953 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.933073 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.933208 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.933634 236 log.go:181] (0x247e000) Data frame received for 5\nI0921 10:25:03.933735 236 log.go:181] (0x27d84d0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2I0921 10:25:03.933882 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.934064 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.934220 236 log.go:181] (0x27d84d0) (5) Data frame sent\nI0921 10:25:03.934382 236 log.go:181] (0x247e000) Data frame received for 5\nI0921 10:25:03.934489 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.934615 236 log.go:181] (0x27d84d0) (5) Data frame handling\nI0921 10:25:03.934715 236 log.go:181] (0x27d84d0) (5) Data frame sent\n http://172.18.0.11:30348/\nI0921 10:25:03.939765 236 log.go:181] (0x247e000) Data frame received for 5\nI0921 10:25:03.939889 236 log.go:181] (0x27d84d0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0921 10:25:03.940000 236 log.go:181] (0x247e000) Data frame received 
for 3\nI0921 10:25:03.940231 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.940381 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.940505 236 log.go:181] (0x27d84d0) (5) Data frame sent\nI0921 10:25:03.944391 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.944533 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.944635 236 log.go:181] (0x2db80e0) (3) Data frame sent\nI0921 10:25:03.944960 236 log.go:181] (0x247e000) Data frame received for 5\nI0921 10:25:03.945229 236 log.go:181] (0x27d84d0) (5) Data frame handling\nI0921 10:25:03.945349 236 log.go:181] (0x247e000) Data frame received for 3\nI0921 10:25:03.945457 236 log.go:181] (0x2db80e0) (3) Data frame handling\nI0921 10:25:03.947266 236 log.go:181] (0x247e000) Data frame received for 1\nI0921 10:25:03.947345 236 log.go:181] (0x247e070) (1) Data frame handling\nI0921 10:25:03.947440 236 log.go:181] (0x247e070) (1) Data frame sent\nI0921 10:25:03.948612 236 log.go:181] (0x247e000) (0x247e070) Stream removed, broadcasting: 1\nI0921 10:25:03.951073 236 log.go:181] (0x247e000) Go away received\nI0921 10:25:03.953574 236 log.go:181] (0x247e000) (0x247e070) Stream removed, broadcasting: 1\nI0921 10:25:03.953807 236 log.go:181] (0x247e000) (0x2db80e0) Stream removed, broadcasting: 3\nI0921 10:25:03.953969 236 log.go:181] (0x247e000) (0x27d84d0) Stream removed, broadcasting: 5\n" Sep 21 10:25:03.965: INFO: stdout: "\naffinity-nodeport-timeout-wwwkw\naffinity-nodeport-timeout-wwwkw\naffinity-nodeport-timeout-wwwkw\naffinity-nodeport-timeout-wwwkw\naffinity-nodeport-timeout-wwwkw\naffinity-nodeport-timeout-wwwkw\naffinity-nodeport-timeout-wwwkw\naffinity-nodeport-timeout-wwwkw\naffinity-nodeport-timeout-wwwkw\naffinity-nodeport-timeout-wwwkw\naffinity-nodeport-timeout-wwwkw\naffinity-nodeport-timeout-wwwkw\naffinity-nodeport-timeout-wwwkw\naffinity-nodeport-timeout-wwwkw\naffinity-nodeport-timeout-wwwkw\naffinity-nodeport-timeout-wwwkw" Sep 21 
10:25:03.965: INFO: Received response from host: affinity-nodeport-timeout-wwwkw Sep 21 10:25:03.965: INFO: Received response from host: affinity-nodeport-timeout-wwwkw Sep 21 10:25:03.965: INFO: Received response from host: affinity-nodeport-timeout-wwwkw Sep 21 10:25:03.965: INFO: Received response from host: affinity-nodeport-timeout-wwwkw Sep 21 10:25:03.965: INFO: Received response from host: affinity-nodeport-timeout-wwwkw Sep 21 10:25:03.965: INFO: Received response from host: affinity-nodeport-timeout-wwwkw Sep 21 10:25:03.965: INFO: Received response from host: affinity-nodeport-timeout-wwwkw Sep 21 10:25:03.965: INFO: Received response from host: affinity-nodeport-timeout-wwwkw Sep 21 10:25:03.965: INFO: Received response from host: affinity-nodeport-timeout-wwwkw Sep 21 10:25:03.965: INFO: Received response from host: affinity-nodeport-timeout-wwwkw Sep 21 10:25:03.965: INFO: Received response from host: affinity-nodeport-timeout-wwwkw Sep 21 10:25:03.965: INFO: Received response from host: affinity-nodeport-timeout-wwwkw Sep 21 10:25:03.966: INFO: Received response from host: affinity-nodeport-timeout-wwwkw Sep 21 10:25:03.966: INFO: Received response from host: affinity-nodeport-timeout-wwwkw Sep 21 10:25:03.966: INFO: Received response from host: affinity-nodeport-timeout-wwwkw Sep 21 10:25:03.966: INFO: Received response from host: affinity-nodeport-timeout-wwwkw Sep 21 10:25:03.967: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7726 execpod-affinity48njc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.11:30348/' Sep 21 10:25:05.459: INFO: stderr: "I0921 10:25:05.326924 257 log.go:181] (0x2e12000) (0x2e12070) Create stream\nI0921 10:25:05.330087 257 log.go:181] (0x2e12000) (0x2e12070) Stream added, broadcasting: 1\nI0921 10:25:05.340382 257 log.go:181] (0x2e12000) Reply frame received for 1\nI0921 10:25:05.340797 257 log.go:181] (0x2e12000) 
(0x2b38230) Create stream\nI0921 10:25:05.340853 257 log.go:181] (0x2e12000) (0x2b38230) Stream added, broadcasting: 3\nI0921 10:25:05.342476 257 log.go:181] (0x2e12000) Reply frame received for 3\nI0921 10:25:05.342890 257 log.go:181] (0x2e12000) (0x2b383f0) Create stream\nI0921 10:25:05.342985 257 log.go:181] (0x2e12000) (0x2b383f0) Stream added, broadcasting: 5\nI0921 10:25:05.344817 257 log.go:181] (0x2e12000) Reply frame received for 5\nI0921 10:25:05.435062 257 log.go:181] (0x2e12000) Data frame received for 5\nI0921 10:25:05.435579 257 log.go:181] (0x2b383f0) (5) Data frame handling\nI0921 10:25:05.436527 257 log.go:181] (0x2b383f0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0921 10:25:05.439316 257 log.go:181] (0x2e12000) Data frame received for 3\nI0921 10:25:05.439440 257 log.go:181] (0x2b38230) (3) Data frame handling\nI0921 10:25:05.439590 257 log.go:181] (0x2b38230) (3) Data frame sent\nI0921 10:25:05.439722 257 log.go:181] (0x2e12000) Data frame received for 3\nI0921 10:25:05.439835 257 log.go:181] (0x2b38230) (3) Data frame handling\nI0921 10:25:05.439963 257 log.go:181] (0x2e12000) Data frame received for 5\nI0921 10:25:05.440094 257 log.go:181] (0x2b383f0) (5) Data frame handling\nI0921 10:25:05.441850 257 log.go:181] (0x2e12000) Data frame received for 1\nI0921 10:25:05.442018 257 log.go:181] (0x2e12070) (1) Data frame handling\nI0921 10:25:05.442175 257 log.go:181] (0x2e12070) (1) Data frame sent\nI0921 10:25:05.444897 257 log.go:181] (0x2e12000) (0x2e12070) Stream removed, broadcasting: 1\nI0921 10:25:05.445738 257 log.go:181] (0x2e12000) Go away received\nI0921 10:25:05.450009 257 log.go:181] (0x2e12000) (0x2e12070) Stream removed, broadcasting: 1\nI0921 10:25:05.450406 257 log.go:181] (0x2e12000) (0x2b38230) Stream removed, broadcasting: 3\nI0921 10:25:05.450686 257 log.go:181] (0x2e12000) (0x2b383f0) Stream removed, broadcasting: 5\n" Sep 21 10:25:05.461: INFO: stdout: "affinity-nodeport-timeout-wwwkw" 
Sep 21 10:25:20.462: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7726 execpod-affinity48njc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.11:30348/' Sep 21 10:25:21.929: INFO: stderr: "I0921 10:25:21.802175 277 log.go:181] (0x2da8150) (0x2da81c0) Create stream\nI0921 10:25:21.805921 277 log.go:181] (0x2da8150) (0x2da81c0) Stream added, broadcasting: 1\nI0921 10:25:21.819679 277 log.go:181] (0x2da8150) Reply frame received for 1\nI0921 10:25:21.820787 277 log.go:181] (0x2da8150) (0x2da8380) Create stream\nI0921 10:25:21.820916 277 log.go:181] (0x2da8150) (0x2da8380) Stream added, broadcasting: 3\nI0921 10:25:21.823166 277 log.go:181] (0x2da8150) Reply frame received for 3\nI0921 10:25:21.823437 277 log.go:181] (0x2da8150) (0x2da8540) Create stream\nI0921 10:25:21.823515 277 log.go:181] (0x2da8150) (0x2da8540) Stream added, broadcasting: 5\nI0921 10:25:21.824848 277 log.go:181] (0x2da8150) Reply frame received for 5\nI0921 10:25:21.910589 277 log.go:181] (0x2da8150) Data frame received for 5\nI0921 10:25:21.911033 277 log.go:181] (0x2da8540) (5) Data frame handling\nI0921 10:25:21.911846 277 log.go:181] (0x2da8540) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30348/\nI0921 10:25:21.914005 277 log.go:181] (0x2da8150) Data frame received for 3\nI0921 10:25:21.914089 277 log.go:181] (0x2da8380) (3) Data frame handling\nI0921 10:25:21.914186 277 log.go:181] (0x2da8380) (3) Data frame sent\nI0921 10:25:21.914826 277 log.go:181] (0x2da8150) Data frame received for 3\nI0921 10:25:21.914941 277 log.go:181] (0x2da8380) (3) Data frame handling\nI0921 10:25:21.915054 277 log.go:181] (0x2da8150) Data frame received for 5\nI0921 10:25:21.915156 277 log.go:181] (0x2da8540) (5) Data frame handling\nI0921 10:25:21.916979 277 log.go:181] (0x2da8150) Data frame received for 1\nI0921 10:25:21.917135 277 log.go:181] (0x2da81c0) (1) Data frame 
handling\nI0921 10:25:21.917296 277 log.go:181] (0x2da81c0) (1) Data frame sent\nI0921 10:25:21.917999 277 log.go:181] (0x2da8150) (0x2da81c0) Stream removed, broadcasting: 1\nI0921 10:25:21.920318 277 log.go:181] (0x2da8150) Go away received\nI0921 10:25:21.922413 277 log.go:181] (0x2da8150) (0x2da81c0) Stream removed, broadcasting: 1\nI0921 10:25:21.922631 277 log.go:181] (0x2da8150) (0x2da8380) Stream removed, broadcasting: 3\nI0921 10:25:21.922768 277 log.go:181] (0x2da8150) (0x2da8540) Stream removed, broadcasting: 5\n" Sep 21 10:25:21.931: INFO: stdout: "affinity-nodeport-timeout-8mrnt" Sep 21 10:25:21.931: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-7726, will wait for the garbage collector to delete the pods Sep 21 10:25:22.058: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 15.424828ms Sep 21 10:25:22.659: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 600.869056ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:25:33.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7726" for this suite. 
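The burst of 16 curl probes above all returned `affinity-nodeport-timeout-wwwkw`, and only after the affinity timeout elapsed did a probe land on a different pod (`affinity-nodeport-timeout-8mrnt`). A minimal sketch of the invariant this test asserts — note this is an illustrative helper, not the e2e framework's actual `checkAffinity` implementation:

```python
def same_backend(responses):
    """True when every non-empty response line names the same backend pod,
    i.e. session affinity held for the whole probe burst."""
    hosts = {line.strip() for line in responses if line.strip()}
    return len(hosts) == 1

# The 16-line stdout captured above, all naming one pod: affinity held.
burst = ["affinity-nodeport-timeout-wwwkw"] * 16
assert same_backend(burst)

# Once the configured timeoutSeconds elapses, a later request may be
# routed to another pod, as the log above shows with -8mrnt.
assert not same_backend(burst + ["affinity-nodeport-timeout-8mrnt"])
```

The real test drives these probes through `kubectl exec` against the NodePort and sleeps past the service's `sessionAffinityConfig.clientIP.timeoutSeconds` before the final request.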
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:68.096 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":24,"skipped":356,"failed":0} [sig-apps] Deployment deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:25:33.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 10:25:33.473: INFO: Creating deployment "webserver-deployment" Sep 21 10:25:33.493: INFO: Waiting for observed 
generation 1 Sep 21 10:25:35.561: INFO: Waiting for all required pods to come up Sep 21 10:25:35.572: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Sep 21 10:25:45.590: INFO: Waiting for deployment "webserver-deployment" to complete Sep 21 10:25:45.601: INFO: Updating deployment "webserver-deployment" with a non-existent image Sep 21 10:25:45.620: INFO: Updating deployment webserver-deployment Sep 21 10:25:45.620: INFO: Waiting for observed generation 2 Sep 21 10:25:47.657: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Sep 21 10:25:47.776: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Sep 21 10:25:47.782: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Sep 21 10:25:47.795: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Sep 21 10:25:47.795: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Sep 21 10:25:47.799: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Sep 21 10:25:47.806: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Sep 21 10:25:47.806: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Sep 21 10:25:47.817: INFO: Updating deployment webserver-deployment Sep 21 10:25:47.817: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Sep 21 10:25:47.925: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Sep 21 10:25:50.747: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 21 10:25:51.454: INFO: 
Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-2080 /apis/apps/v1/namespaces/deployment-2080/deployments/webserver-deployment 9f9a0e38-70c8-427a-bfcb-83c3b8a64924 2047432 3 2020-09-21 10:25:33 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-09-21 10:25:47 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-21 10:25:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil 
/dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x8cea1e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-09-21 10:25:47 +0000 UTC,LastTransitionTime:2020-09-21 10:25:47 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2020-09-21 10:25:49 +0000 UTC,LastTransitionTime:2020-09-21 10:25:33 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Sep 21 10:25:51.616: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-2080 /apis/apps/v1/namespaces/deployment-2080/replicasets/webserver-deployment-795d758f88 3417e797-5641-4659-9fe8-847c494daa39 2047419 3 2020-09-21 10:25:45 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 
Deployment webserver-deployment 9f9a0e38-70c8-427a-bfcb-83c3b8a64924 0x8cea607 0x8cea608}] [] [{kube-controller-manager Update apps/v1 2020-09-21 10:25:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f9a0e38-70c8-427a-bfcb-83c3b8a64924\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x8cea6a8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 21 10:25:51.616: INFO: All old ReplicaSets of Deployment "webserver-deployment": Sep 21 10:25:51.617: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-2080 /apis/apps/v1/namespaces/deployment-2080/replicasets/webserver-deployment-dd94f59b7 ccc408b3-d057-43fe-8359-70e632bdf1c7 2047407 3 2020-09-21 10:25:33 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 9f9a0e38-70c8-427a-bfcb-83c3b8a64924 0x8cea717 0x8cea718}] [] [{kube-controller-manager Update apps/v1 2020-09-21 10:25:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f9a0e38-70c8-427a-bfcb-83c3b8a64924\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x8cea788 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Sep 21 10:25:51.897: INFO: Pod "webserver-deployment-795d758f88-4dpn4" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-4dpn4 webserver-deployment-795d758f88- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-795d758f88-4dpn4 563a8e5d-1684-45ad-bc2f-39c6e6ed66ad 2047457 0 2020-09-21 10:25:48 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 3417e797-5641-4659-9fe8-847c494daa39 0x824daf0 0x824daf1}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3417e797-5641-4659-9fe8-847c494daa39\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-21 10:25:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-09-21 10:25:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 21 10:25:51.899: INFO: Pod "webserver-deployment-795d758f88-5znvh" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-5znvh webserver-deployment-795d758f88- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-795d758f88-5znvh 23797254-cb33-437c-a508-bab57fd360a5 2047445 0 2020-09-21 10:25:48 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 3417e797-5641-4659-9fe8-847c494daa39 0x824dc97 0x824dc98}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3417e797-5641-4659-9fe8-847c494daa39\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-21 10:25:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-21 10:25:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 21 10:25:51.901: INFO: Pod "webserver-deployment-795d758f88-99gxw" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-99gxw webserver-deployment-795d758f88- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-795d758f88-99gxw 2d3d8fa9-a56f-4867-8350-3501b12d00ef 2047464 0 2020-09-21 10:25:48 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 3417e797-5641-4659-9fe8-847c494daa39 0x824de47 0x824de48}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3417e797-5641-4659-9fe8-847c494daa39\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-21 10:25:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-09-21 10:25:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 21 10:25:51.902: INFO: Pod "webserver-deployment-795d758f88-9wb6w" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-9wb6w webserver-deployment-795d758f88- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-795d758f88-9wb6w e749b4ec-b789-478e-bf14-700a8f0a8ed5 2047319 0 2020-09-21 10:25:45 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 3417e797-5641-4659-9fe8-847c494daa39 0x64203a7 0x64203a8}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3417e797-5641-4659-9fe8-847c494daa39\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:45 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-21 10:25:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-09-21 10:25:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 21 10:25:51.904: INFO: Pod "webserver-deployment-795d758f88-b82k9" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-b82k9 webserver-deployment-795d758f88- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-795d758f88-b82k9 49d05940-0e9b-4997-b5a7-126ad02e3612 2047332 0 2020-09-21 10:25:45 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 3417e797-5641-4659-9fe8-847c494daa39 0x6420e97 0x6420e98}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3417e797-5641-4659-9fe8-847c494daa39\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:45 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-21 10:25:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 21 10:25:51.905: INFO: Pod "webserver-deployment-795d758f88-jnt4b" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-jnt4b webserver-deployment-795d758f88- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-795d758f88-jnt4b 56c99524-a0dc-488a-928b-1f9b28750fe6 2047433 0 2020-09-21 10:25:48 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 3417e797-5641-4659-9fe8-847c494daa39 0x6421087 0x6421088}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3417e797-5641-4659-9fe8-847c494daa39\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-09-21 10:25:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 21 10:25:51.907: INFO: Pod "webserver-deployment-795d758f88-mtrh7" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-mtrh7 webserver-deployment-795d758f88- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-795d758f88-mtrh7 9033bc6e-b035-47c2-b001-c357cf57908b 2047462 0 2020-09-21 10:25:48 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 3417e797-5641-4659-9fe8-847c494daa39 0x64214b7 0x64214b8}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3417e797-5641-4659-9fe8-847c494daa39\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-21 10:25:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 21 10:25:51.908: INFO: Pod "webserver-deployment-795d758f88-n9zp5" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-n9zp5 webserver-deployment-795d758f88- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-795d758f88-n9zp5 b98cbb63-d5e7-4337-8484-ee7dbebf127d 2047307 0 2020-09-21 10:25:45 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 3417e797-5641-4659-9fe8-847c494daa39 0x64217f7 0x64217f8}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3417e797-5641-4659-9fe8-847c494daa39\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-09-21 10:25:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 21 10:25:51.909: INFO: Pod "webserver-deployment-795d758f88-nzxzk" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-nzxzk webserver-deployment-795d758f88- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-795d758f88-nzxzk c6dfd143-49d8-4639-bc31-cc716ea9c205 2047399 0 2020-09-21 10:25:48 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 3417e797-5641-4659-9fe8-847c494daa39 0x6421b57 0x6421b58}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3417e797-5641-4659-9fe8-847c494daa39\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 21 10:25:51.911: INFO: Pod "webserver-deployment-795d758f88-pqzb2" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-pqzb2 webserver-deployment-795d758f88- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-795d758f88-pqzb2 766cd5a3-aaa8-4836-b367-fb09d7c32164 2047310 0 2020-09-21 10:25:45 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 3417e797-5641-4659-9fe8-847c494daa39 0x6421db7 0x6421db8}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3417e797-5641-4659-9fe8-847c494daa39\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-21 10:25:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 21 10:25:51.913: INFO: Pod "webserver-deployment-795d758f88-tcqpq" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-tcqpq webserver-deployment-795d758f88- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-795d758f88-tcqpq c8d4ebc7-e12b-4398-bd4f-f1a8d0d49efc 2047404 0 2020-09-21 10:25:48 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 3417e797-5641-4659-9fe8-847c494daa39 0x658add7 0x658add8}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3417e797-5641-4659-9fe8-847c494daa39\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 21 10:25:51.914: INFO: Pod "webserver-deployment-795d758f88-v2pcg" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-v2pcg webserver-deployment-795d758f88- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-795d758f88-v2pcg 537b1de6-75dd-44b2-a5fe-04da1777a575 2047422 0 2020-09-21 10:25:47 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 3417e797-5641-4659-9fe8-847c494daa39 0x658b287 0x658b288}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3417e797-5641-4659-9fe8-847c494daa39\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-21 10:25:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-09-21 10:25:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 21 10:25:51.915: INFO: Pod "webserver-deployment-795d758f88-xxw44" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-xxw44 webserver-deployment-795d758f88- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-795d758f88-xxw44 27786744-4dd3-4f1e-bc7b-9bc43d9fedda 2047337 0 2020-09-21 10:25:45 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 3417e797-5641-4659-9fe8-847c494daa39 0x658b967 0x658b968}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3417e797-5641-4659-9fe8-847c494daa39\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-21 10:25:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-21 10:25:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 21 10:25:51.917: INFO: Pod "webserver-deployment-dd94f59b7-4hrzq" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-4hrzq webserver-deployment-dd94f59b7- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-dd94f59b7-4hrzq 744685ce-8a91-4182-8096-0bbb5699649f 2047239 0 2020-09-21 10:25:33 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ccc408b3-d057-43fe-8359-70e632bdf1c7 0x658bcc7 0x658bcc8}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccc408b3-d057-43fe-8359-70e632bdf1c7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:44 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.71\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:33 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.71,StartTime:2020-09-21 10:25:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-21 10:25:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a75d0acaec0523c1db510b82e061f7ffc0d18a8c00596c420ec6c49cf436b0f2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.71,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 21 10:25:51.918: INFO: Pod "webserver-deployment-dd94f59b7-7cmqw" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-7cmqw webserver-deployment-dd94f59b7- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-dd94f59b7-7cmqw be78bab1-a75f-42f2-9c5a-73ab2126778d 2047403 0 2020-09-21 10:25:48 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ccc408b3-d057-43fe-8359-70e632bdf1c7 0x658bfb7 0x658bfb8}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccc408b3-d057-43fe-8359-70e632bdf1c7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 21 10:25:51.919: INFO: Pod "webserver-deployment-dd94f59b7-9lv2l" is available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-9lv2l webserver-deployment-dd94f59b7- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-dd94f59b7-9lv2l 9b8ee21a-5665-45cf-85ba-537c971a03ea 2047265 0 2020-09-21 10:25:33 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ccc408b3-d057-43fe-8359-70e632bdf1c7 0x6425347 0x6425348}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccc408b3-d057-43fe-8359-70e632bdf1c7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:44 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.72\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:33 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.72,StartTime:2020-09-21 10:25:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-21 10:25:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6973c8a0b8d4093e3f87ad5ba6b6b6f2a10aab0f02f95e5a9353dacbd3a44da0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.72,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 21 10:25:51.920: INFO: Pod "webserver-deployment-dd94f59b7-bhlp8" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-bhlp8 webserver-deployment-dd94f59b7- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-dd94f59b7-bhlp8 7db9ec29-2579-45c6-8c35-0e1a06d7b15f 2047450 0 2020-09-21 10:25:48 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ccc408b3-d057-43fe-8359-70e632bdf1c7 0x6919127 0x6919128}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccc408b3-d057-43fe-8359-70e632bdf1c7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Ph
ase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-21 10:25:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 21 10:25:51.922: INFO: Pod "webserver-deployment-dd94f59b7-cvcdw" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-cvcdw webserver-deployment-dd94f59b7- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-dd94f59b7-cvcdw b7934d74-f0f9-488d-bac8-c66890f0c321 2047178 0 2020-09-21 10:25:33 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ccc408b3-d057-43fe-8359-70e632bdf1c7 0x6919d97 0x6919d98}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccc408b3-d057-43fe-8359-70e632bdf1c7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.47\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Resou
rceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHost
nameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.1.47,StartTime:2020-09-21 10:25:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-21 10:25:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a53369a07b09fd2967f4853909c00aa94d3bf0b7c15b7e8af384c335bcecd1ff,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.47,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 21 10:25:51.923: INFO: Pod "webserver-deployment-dd94f59b7-dmrqb" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-dmrqb webserver-deployment-dd94f59b7- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-dd94f59b7-dmrqb 04f39145-f99b-4588-b98a-62312cc5da4b 2047209 0 2020-09-21 10:25:33 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ccc408b3-d057-43fe-8359-70e632bdf1c7 0x641a387 
0x641a388}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccc408b3-d057-43fe-8359-70e632bdf1c7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.68\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-al
pine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpread
Constraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.68,StartTime:2020-09-21 10:25:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-21 10:25:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7bccd37dae5bbdb69d3c431263fa67c673166a64d8acc25c96b178585e0a9eac,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 21 10:25:51.924: INFO: Pod "webserver-deployment-dd94f59b7-f8jxp" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-f8jxp webserver-deployment-dd94f59b7- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-dd94f59b7-f8jxp e290dd32-63ad-4516-aa26-03c33d5c0925 2047414 0 2020-09-21 10:25:47 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 
ReplicaSet webserver-deployment-dd94f59b7 ccc408b3-d057-43fe-8359-70e632bdf1c7 0x641add7 0x641add8}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccc408b3-d057-43fe-8359-70e632bdf1c7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Contain
er{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*Preempt
LowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-21 10:25:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 21 10:25:51.925: INFO: Pod "webserver-deployment-dd94f59b7-fnvnm" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-fnvnm webserver-deployment-dd94f59b7- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-dd94f59b7-fnvnm d098627f-5799-4cb5-b7d4-1ea3a70d2863 2047441 0 2020-09-21 10:25:48 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 
ccc408b3-d057-43fe-8359-70e632bdf1c7 0x680c1d7 0x680c1d8}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccc408b3-d057-43fe-8359-70e632bdf1c7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/http
d:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},Top
ologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-09-21 10:25:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 21 10:25:51.927: INFO: Pod "webserver-deployment-dd94f59b7-h2v7c" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-h2v7c webserver-deployment-dd94f59b7- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-dd94f59b7-h2v7c 7604564e-6906-4af0-bd97-a2a10a2c21aa 2047242 0 2020-09-21 10:25:33 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ccc408b3-d057-43fe-8359-70e632bdf1c7 0x680c377 
0x680c378}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccc408b3-d057-43fe-8359-70e632bdf1c7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.49\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-al
pine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadC
onstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.1.49,StartTime:2020-09-21 10:25:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-21 10:25:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0bf990e5729aa331eb4a329a5148a6ccb08ce1d26aaa932fa79c179403647084,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.49,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 21 10:25:51.928: INFO: Pod "webserver-deployment-dd94f59b7-h9wgn" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-h9wgn webserver-deployment-dd94f59b7- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-dd94f59b7-h9wgn 4938a3df-4de2-4cbe-926d-7960992bd56d 2047220 0 2020-09-21 10:25:33 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 
ReplicaSet webserver-deployment-dd94f59b7 ccc408b3-d057-43fe-8359-70e632bdf1c7 0x680c527 0x680c528}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccc408b3-d057-43fe-8359-70e632bdf1c7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.69\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,
},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableService
Links:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.69,StartTime:2020-09-21 10:25:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-21 10:25:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4d54634f34f6b9084d4f8a80577a5b8a24c6f813ae72e5209e83f6f322c81e62,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.69,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 21 10:25:51.929: INFO: Pod "webserver-deployment-dd94f59b7-hm2z7" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-hm2z7 webserver-deployment-dd94f59b7- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-dd94f59b7-hm2z7 f1692dc3-5d3c-447f-9d4a-9ab47bd242e1 2047468 0 
2020-09-21 10:25:48 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ccc408b3-d057-43fe-8359-70e632bdf1c7 0x680c717 0x680c718}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccc408b3-d057-43fe-8359-70e632bdf1c7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:
nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]
PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-21 10:25:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 21 10:25:51.931: INFO: Pod "webserver-deployment-dd94f59b7-kfhj2" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-kfhj2 webserver-deployment-dd94f59b7- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-dd94f59b7-kfhj2 d471be6c-6b3e-46c9-a29a-df8c68704580 2047430 0 2020-09-21 10:25:47 +0000 UTC map[name:httpd 
pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ccc408b3-d057-43fe-8359-70e632bdf1c7 0x680c9e7 0x680c9e8}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccc408b3-d057-43fe-8359-70e632bdf1c7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephem
eral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,Enabl
eServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-21 10:25:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 21 10:25:51.932: INFO: Pod "webserver-deployment-dd94f59b7-ndgg7" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-ndgg7 webserver-deployment-dd94f59b7- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-dd94f59b7-ndgg7 413a6ae9-fdc5-446c-942c-57e6782bd0f3 2047469 0 2020-09-21 10:25:48 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 
ReplicaSet webserver-deployment-dd94f59b7 ccc408b3-d057-43fe-8359-70e632bdf1c7 0x680cc97 0x680cc98}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccc408b3-d057-43fe-8359-70e632bdf1c7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Contain
er{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*Preemp
tLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-09-21 10:25:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 21 10:25:51.933: INFO: Pod "webserver-deployment-dd94f59b7-p6dmq" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-p6dmq webserver-deployment-dd94f59b7- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-dd94f59b7-p6dmq c83126b7-bd27-4be3-81f4-6a1fd5ea6aab 2047455 0 2020-09-21 10:25:48 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 
ccc408b3-d057-43fe-8359-70e632bdf1c7 0x680cf27 0x680cf28}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccc408b3-d057-43fe-8359-70e632bdf1c7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/http
d:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},Topo
logySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-21 10:25:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 21 10:25:51.934: INFO: Pod "webserver-deployment-dd94f59b7-pts8f" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-pts8f webserver-deployment-dd94f59b7- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-dd94f59b7-pts8f e6147b29-85c6-4fc7-a5d4-f3828522652f 2047402 0 2020-09-21 10:25:48 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ccc408b3-d057-43fe-8359-70e632bdf1c7 
0x680d197 0x680d198}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccc408b3-d057-43fe-8359-70e632bdf1c7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoo
t:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 21 
10:25:51.935: INFO: Pod "webserver-deployment-dd94f59b7-q42w9" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-q42w9 webserver-deployment-dd94f59b7- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-dd94f59b7-q42w9 0b407fca-2377-4279-97e1-1fff9fafca94 2047447 0 2020-09-21 10:25:48 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ccc408b3-d057-43fe-8359-70e632bdf1c7 0x680d397 0x680d398}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccc408b3-d057-43fe-8359-70e632bdf1c7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-21 10:25:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-09-21 10:25:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 21 10:25:51.936: INFO: Pod "webserver-deployment-dd94f59b7-qfm2k" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-qfm2k webserver-deployment-dd94f59b7- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-dd94f59b7-qfm2k 3dfb194f-8d72-435a-a6e8-13c372f6f84a 2047452 0 2020-09-21 10:25:48 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ccc408b3-d057-43fe-8359-70e632bdf1c7 0x680d607 0x680d608}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccc408b3-d057-43fe-8359-70e632bdf1c7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-21 10:25:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-09-21 10:25:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 21 10:25:51.937: INFO: Pod "webserver-deployment-dd94f59b7-r4kpp" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-r4kpp webserver-deployment-dd94f59b7- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-dd94f59b7-r4kpp 819a4d7a-8a35-4d62-a2d8-2de297c93d79 2047439 0 2020-09-21 10:25:47 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ccc408b3-d057-43fe-8359-70e632bdf1c7 0x680d807 0x680d808}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccc408b3-d057-43fe-8359-70e632bdf1c7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-21 10:25:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-21 10:25:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 21 10:25:51.939: INFO: Pod "webserver-deployment-dd94f59b7-sh2pw" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-sh2pw webserver-deployment-dd94f59b7- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-dd94f59b7-sh2pw 8e19222f-9798-41f4-bbfd-b1b8e14ee199 2047212 0 2020-09-21 10:25:33 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ccc408b3-d057-43fe-8359-70e632bdf1c7 0x680da67 0x680da68}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccc408b3-d057-43fe-8359-70e632bdf1c7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.48\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:33 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.1.48,StartTime:2020-09-21 10:25:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-21 10:25:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://898bbdefa62a70ad1f3fb70a53ae4fbb2b5e1dd23f86d522fee5e056ef46b20e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.48,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 21 10:25:51.940: INFO: Pod "webserver-deployment-dd94f59b7-xkq6b" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-xkq6b webserver-deployment-dd94f59b7- deployment-2080 /api/v1/namespaces/deployment-2080/pods/webserver-deployment-dd94f59b7-xkq6b b93a4948-1967-4303-a91d-0e4fbaaf7b18 2047231 0 2020-09-21 10:25:33 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ccc408b3-d057-43fe-8359-70e632bdf1c7 0x680dcc7 0x680dcc8}] [] [{kube-controller-manager Update v1 2020-09-21 10:25:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccc408b3-d057-43fe-8359-70e632bdf1c7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:25:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.70\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzznt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzznt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Resou
rceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzznt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHos
tnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:25:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.70,StartTime:2020-09-21 10:25:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-21 10:25:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f3c7bd5a0b91699e5ca6c6bc960d50164229c7a2ea31df3749c8a709747ce649,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.70,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:25:51.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2080" for this suite. 
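The dumps above come from the `deployment should support proportional scaling` conformance test: two ReplicaSets coexist mid-rollout and extra replicas are split between them in proportion to their current sizes. A minimal sketch of that allocation rule (the names and the leftover-handling here are simplifications; the real algorithm lives in the deployment controller and is more careful about rounding and maxSurge):

```go
package main

import "fmt"

// proportionalScale distributes `extra` replicas across ReplicaSets in
// proportion to their current sizes. Simplified sketch of the behaviour
// the e2e test above exercises; rounding leftovers are handed to the
// first ReplicaSet, which here stands in for the largest one.
func proportionalScale(current []int, extra int) []int {
	total := 0
	for _, c := range current {
		total += c
	}
	out := make([]int, len(current))
	allocated := 0
	for i, c := range current {
		add := extra * c / total // floor of the proportional share
		out[i] = c + add
		allocated += add
	}
	out[0] += extra - allocated // assign any rounding leftover
	return out
}

func main() {
	// Two ReplicaSets at 8 and 2 replicas; scale the Deployment up by 5.
	fmt.Println(proportionalScale([]int{8, 2}, 5)) // prints [12 3]
}
```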
• [SLOW TEST:18.995 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":303,"completed":25,"skipped":356,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:25:52.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 10:25:53.756: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6985' Sep 21 10:25:57.638: INFO: stderr: "" Sep 21 10:25:57.638: INFO: 
stdout: "replicationcontroller/agnhost-primary created\n" Sep 21 10:25:57.638: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6985' Sep 21 10:26:01.935: INFO: stderr: "" Sep 21 10:26:01.935: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Sep 21 10:26:03.561: INFO: Selector matched 1 pods for map[app:agnhost] Sep 21 10:26:03.561: INFO: Found 0 / 1 Sep 21 10:26:04.413: INFO: Selector matched 1 pods for map[app:agnhost] Sep 21 10:26:04.413: INFO: Found 0 / 1 Sep 21 10:26:04.979: INFO: Selector matched 1 pods for map[app:agnhost] Sep 21 10:26:04.979: INFO: Found 0 / 1 Sep 21 10:26:05.952: INFO: Selector matched 1 pods for map[app:agnhost] Sep 21 10:26:05.953: INFO: Found 0 / 1 Sep 21 10:26:06.956: INFO: Selector matched 1 pods for map[app:agnhost] Sep 21 10:26:06.957: INFO: Found 1 / 1 Sep 21 10:26:06.957: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Sep 21 10:26:06.963: INFO: Selector matched 1 pods for map[app:agnhost] Sep 21 10:26:06.963: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Sep 21 10:26:06.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config describe pod agnhost-primary-lsg9p --namespace=kubectl-6985' Sep 21 10:26:08.321: INFO: stderr: "" Sep 21 10:26:08.321: INFO: stdout: "Name: agnhost-primary-lsg9p\nNamespace: kubectl-6985\nPriority: 0\nNode: kali-worker2/172.18.0.12\nStart Time: Mon, 21 Sep 2020 10:25:58 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.2.84\nIPs:\n IP: 10.244.2.84\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://a750a5f69c3c1253f4c2dc331ec361ef832af03a661c638c38fc5964ffb5a913\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 21 Sep 2020 10:26:05 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-6hbx5 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-6hbx5:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-6hbx5\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 10s default-scheduler Successfully assigned kubectl-6985/agnhost-primary-lsg9p to kali-worker2\n Normal Pulled 6s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.20\" already present on machine\n Normal Created 3s kubelet Created container agnhost-primary\n Normal Started 2s kubelet Started container agnhost-primary\n" Sep 21 10:26:08.323: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config describe rc agnhost-primary --namespace=kubectl-6985' Sep 21 10:26:10.454: INFO: stderr: "" Sep 21 10:26:10.454: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-6985\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 13s replication-controller Created pod: agnhost-primary-lsg9p\n" Sep 21 10:26:10.455: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config describe service agnhost-primary --namespace=kubectl-6985' Sep 21 10:26:12.780: INFO: stderr: "" Sep 21 10:26:12.780: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-6985\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP: 10.105.249.244\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.84:6379\nSession Affinity: None\nEvents: \n" Sep 21 10:26:12.813: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config describe node kali-control-plane' Sep 21 10:26:14.554: INFO: stderr: "" Sep 21 10:26:14.555: INFO: stdout: "Name: kali-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=kali-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: 
true\nCreationTimestamp: Sun, 13 Sep 2020 16:56:52 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: kali-control-plane\n AcquireTime: \n RenewTime: Mon, 21 Sep 2020 10:26:07 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 21 Sep 2020 10:25:03 +0000 Sun, 13 Sep 2020 16:56:52 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 21 Sep 2020 10:25:03 +0000 Sun, 13 Sep 2020 16:56:52 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 21 Sep 2020 10:25:03 +0000 Sun, 13 Sep 2020 16:56:52 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 21 Sep 2020 10:25:03 +0000 Sun, 13 Sep 2020 16:57:42 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.13\n Hostname: kali-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: 014def55fc1b49ad9a05fccd634c789f\n System UUID: d1b3cd05-ea3b-4919-8b5e-667c68c9f797\n Boot ID: 6cae8cc9-70fd-486a-9495-a1a7da130c42\n Kernel Version: 4.15.0-115-generic\n OS Image: Ubuntu Groovy Gorilla (development branch)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0\n Kubelet Version: v1.19.0\n Kube-Proxy Version: v1.19.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-f9fd979d6-77lvd 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 7d17h\n 
kube-system coredns-f9fd979d6-nbdk6 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 7d17h\n kube-system etcd-kali-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7d17h\n kube-system kindnet-pmkbq 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 7d17h\n kube-system kube-apiserver-kali-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 7d17h\n kube-system kube-controller-manager-kali-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 7d17h\n kube-system kube-proxy-z8fp7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7d17h\n kube-system kube-scheduler-kali-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 7d17h\n local-path-storage local-path-provisioner-78776bfc44-pcrjw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7d17h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Sep 21 10:26:14.559: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config describe namespace kubectl-6985' Sep 21 10:26:15.965: INFO: stderr: "" Sep 21 10:26:15.965: INFO: stdout: "Name: kubectl-6985\nLabels: e2e-framework=kubectl\n e2e-run=f28976c6-96d6-4bbe-8df6-f43507655ea7\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:26:15.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6985" for this suite. 
• [SLOW TEST:23.574 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1105 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":303,"completed":26,"skipped":371,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:26:15.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 10:26:16.804: INFO: >>> kubeConfig: 
/root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:26:17.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7493" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":303,"completed":27,"skipped":384,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:26:18.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create services for rc [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Sep 21 10:26:18.505: INFO: namespace kubectl-6245 Sep 21 10:26:18.505: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6245' Sep 21 
10:26:20.687: INFO: stderr: "" Sep 21 10:26:20.688: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Sep 21 10:26:21.806: INFO: Selector matched 1 pods for map[app:agnhost] Sep 21 10:26:21.807: INFO: Found 0 / 1 Sep 21 10:26:23.667: INFO: Selector matched 1 pods for map[app:agnhost] Sep 21 10:26:23.667: INFO: Found 0 / 1 Sep 21 10:26:24.185: INFO: Selector matched 1 pods for map[app:agnhost] Sep 21 10:26:24.185: INFO: Found 0 / 1 Sep 21 10:26:24.813: INFO: Selector matched 1 pods for map[app:agnhost] Sep 21 10:26:24.813: INFO: Found 0 / 1 Sep 21 10:26:25.703: INFO: Selector matched 1 pods for map[app:agnhost] Sep 21 10:26:25.703: INFO: Found 0 / 1 Sep 21 10:26:26.698: INFO: Selector matched 1 pods for map[app:agnhost] Sep 21 10:26:26.698: INFO: Found 0 / 1 Sep 21 10:26:27.698: INFO: Selector matched 1 pods for map[app:agnhost] Sep 21 10:26:27.699: INFO: Found 1 / 1 Sep 21 10:26:27.699: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Sep 21 10:26:27.705: INFO: Selector matched 1 pods for map[app:agnhost] Sep 21 10:26:27.706: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Sep 21 10:26:27.706: INFO: wait on agnhost-primary startup in kubectl-6245 Sep 21 10:26:27.707: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config logs agnhost-primary-65nhw agnhost-primary --namespace=kubectl-6245' Sep 21 10:26:28.940: INFO: stderr: "" Sep 21 10:26:28.940: INFO: stdout: "Paused\n" STEP: exposing RC Sep 21 10:26:28.941: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6245' Sep 21 10:26:30.350: INFO: stderr: "" Sep 21 10:26:30.350: INFO: stdout: "service/rm2 exposed\n" Sep 21 10:26:30.381: INFO: Service rm2 in namespace kubectl-6245 found. 
STEP: exposing service Sep 21 10:26:32.397: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6245' Sep 21 10:26:33.785: INFO: stderr: "" Sep 21 10:26:33.785: INFO: stdout: "service/rm3 exposed\n" Sep 21 10:26:33.805: INFO: Service rm3 in namespace kubectl-6245 found. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:26:35.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6245" for this suite. • [SLOW TEST:17.739 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1246 should create services for rc [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":303,"completed":28,"skipped":388,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:26:35.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 21 10:26:35.895: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 21 10:26:35.943: INFO: Waiting for terminating namespaces to be deleted... Sep 21 10:26:35.953: INFO: Logging pods the apiserver thinks is on node kali-worker before test Sep 21 10:26:35.961: INFO: kindnet-jk7qk from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 21 10:26:35.962: INFO: Container kindnet-cni ready: true, restart count 0 Sep 21 10:26:35.962: INFO: kube-proxy-kz8hk from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 21 10:26:35.962: INFO: Container kube-proxy ready: true, restart count 0 Sep 21 10:26:35.962: INFO: agnhost-primary-65nhw from kubectl-6245 started at 2020-09-21 10:26:20 +0000 UTC (1 container statuses recorded) Sep 21 10:26:35.962: INFO: Container agnhost-primary ready: true, restart count 0 Sep 21 10:26:35.962: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test Sep 21 10:26:35.970: INFO: kindnet-r64bh from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 21 10:26:35.970: INFO: Container kindnet-cni ready: true, restart count 0 Sep 21 10:26:35.970: INFO: kube-proxy-rnv9w from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 21 10:26:35.971: INFO: Container kube-proxy ready: true, restart 
count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node kali-worker STEP: verifying the node has the label node kali-worker2 Sep 21 10:26:36.114: INFO: Pod kindnet-jk7qk requesting resource cpu=100m on Node kali-worker Sep 21 10:26:36.114: INFO: Pod kindnet-r64bh requesting resource cpu=100m on Node kali-worker2 Sep 21 10:26:36.114: INFO: Pod kube-proxy-kz8hk requesting resource cpu=0m on Node kali-worker Sep 21 10:26:36.115: INFO: Pod kube-proxy-rnv9w requesting resource cpu=0m on Node kali-worker2 Sep 21 10:26:36.115: INFO: Pod agnhost-primary-65nhw requesting resource cpu=0m on Node kali-worker STEP: Starting Pods to consume most of the cluster CPU. Sep 21 10:26:36.115: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker Sep 21 10:26:36.128: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-ed5fb6b6-519c-495c-9320-1ba9a4b755b0.1636c59d6c36d421], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-ed5fb6b6-519c-495c-9320-1ba9a4b755b0.1636c59d1e671a7e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1992/filler-pod-ed5fb6b6-519c-495c-9320-1ba9a4b755b0 to kali-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-20ebcad8-403d-46e6-84a2-15932109b3b3.1636c59e4586a09c], Reason = [Created], Message = [Created container filler-pod-20ebcad8-403d-46e6-84a2-15932109b3b3] STEP: Considering event: Type = [Normal], Name = [filler-pod-ed5fb6b6-519c-495c-9320-1ba9a4b755b0.1636c59def0e8ff5], Reason = [Created], Message = [Created container filler-pod-ed5fb6b6-519c-495c-9320-1ba9a4b755b0] STEP: Considering event: Type = [Normal], Name = [filler-pod-ed5fb6b6-519c-495c-9320-1ba9a4b755b0.1636c59e35d4988f], Reason = [Started], Message = [Started container filler-pod-ed5fb6b6-519c-495c-9320-1ba9a4b755b0] STEP: Considering event: Type = [Normal], Name = [filler-pod-20ebcad8-403d-46e6-84a2-15932109b3b3.1636c59dc0a77433], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-20ebcad8-403d-46e6-84a2-15932109b3b3.1636c59e573917de], Reason = [Started], Message = [Started container filler-pod-20ebcad8-403d-46e6-84a2-15932109b3b3] STEP: Considering event: Type = [Normal], Name = [filler-pod-20ebcad8-403d-46e6-84a2-15932109b3b3.1636c59d20716ead], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1992/filler-pod-20ebcad8-403d-46e6-84a2-15932109b3b3 to kali-worker2] STEP: Considering event: Type = [Warning], Name = [additional-pod.1636c59e9a902a4d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, 
that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.1636c59ea4e94821], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node kali-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node kali-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:26:43.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1992" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.988 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":303,"completed":29,"skipped":409,"failed":0} [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 
[BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:26:43.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-b42fb251-9719-45df-8cea-824dc0753412 STEP: Creating a pod to test consume secrets Sep 21 10:26:44.033: INFO: Waiting up to 5m0s for pod "pod-secrets-7bed67b5-04f6-4269-8fff-39f7faad293e" in namespace "secrets-392" to be "Succeeded or Failed" Sep 21 10:26:44.059: INFO: Pod "pod-secrets-7bed67b5-04f6-4269-8fff-39f7faad293e": Phase="Pending", Reason="", readiness=false. Elapsed: 25.839885ms Sep 21 10:26:46.106: INFO: Pod "pod-secrets-7bed67b5-04f6-4269-8fff-39f7faad293e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071998051s Sep 21 10:26:48.219: INFO: Pod "pod-secrets-7bed67b5-04f6-4269-8fff-39f7faad293e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185418037s Sep 21 10:26:51.088: INFO: Pod "pod-secrets-7bed67b5-04f6-4269-8fff-39f7faad293e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 7.054850829s STEP: Saw pod success Sep 21 10:26:51.089: INFO: Pod "pod-secrets-7bed67b5-04f6-4269-8fff-39f7faad293e" satisfied condition "Succeeded or Failed" Sep 21 10:26:51.207: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-7bed67b5-04f6-4269-8fff-39f7faad293e container secret-volume-test: STEP: delete the pod Sep 21 10:26:51.376: INFO: Waiting for pod pod-secrets-7bed67b5-04f6-4269-8fff-39f7faad293e to disappear Sep 21 10:26:51.387: INFO: Pod pod-secrets-7bed67b5-04f6-4269-8fff-39f7faad293e no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:26:51.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-392" for this suite. • [SLOW TEST:7.598 seconds] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":30,"skipped":409,"failed":0} SSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a 
kubernetes client Sep 21 10:26:51.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3718 STEP: creating service affinity-clusterip-transition in namespace services-3718 STEP: creating replication controller affinity-clusterip-transition in namespace services-3718 I0921 10:26:52.048008 10 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-3718, replica count: 3 I0921 10:26:55.099869 10 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0921 10:26:58.100709 10 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 21 10:26:58.107: INFO: Creating new exec pod Sep 21 10:27:03.216: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-3718 execpod-affinityj6n8p -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Sep 21 10:27:04.690: INFO: stderr: "I0921 10:27:04.565923 521 log.go:181] (0x29040e0) (0x2904150) Create stream\nI0921 10:27:04.569935 521 log.go:181] (0x29040e0) (0x2904150) Stream added, broadcasting: 1\nI0921 10:27:04.583524 521 log.go:181] (0x29040e0) Reply frame received for 1\nI0921 
10:27:04.584402 521 log.go:181] (0x29040e0) (0x29da070) Create stream\nI0921 10:27:04.584487 521 log.go:181] (0x29040e0) (0x29da070) Stream added, broadcasting: 3\nI0921 10:27:04.586317 521 log.go:181] (0x29040e0) Reply frame received for 3\nI0921 10:27:04.586607 521 log.go:181] (0x29040e0) (0x2904380) Create stream\nI0921 10:27:04.586680 521 log.go:181] (0x29040e0) (0x2904380) Stream added, broadcasting: 5\nI0921 10:27:04.588305 521 log.go:181] (0x29040e0) Reply frame received for 5\nI0921 10:27:04.674608 521 log.go:181] (0x29040e0) Data frame received for 5\nI0921 10:27:04.674808 521 log.go:181] (0x2904380) (5) Data frame handling\nI0921 10:27:04.675000 521 log.go:181] (0x29040e0) Data frame received for 3\nI0921 10:27:04.675233 521 log.go:181] (0x29da070) (3) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0921 10:27:04.675525 521 log.go:181] (0x2904380) (5) Data frame sent\nI0921 10:27:04.676510 521 log.go:181] (0x29040e0) Data frame received for 1\nI0921 10:27:04.676665 521 log.go:181] (0x2904150) (1) Data frame handling\nI0921 10:27:04.676828 521 log.go:181] (0x29040e0) Data frame received for 5\nI0921 10:27:04.676964 521 log.go:181] (0x2904380) (5) Data frame handling\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0921 10:27:04.677115 521 log.go:181] (0x2904150) (1) Data frame sent\nI0921 10:27:04.677345 521 log.go:181] (0x2904380) (5) Data frame sent\nI0921 10:27:04.677446 521 log.go:181] (0x29040e0) Data frame received for 5\nI0921 10:27:04.677522 521 log.go:181] (0x2904380) (5) Data frame handling\nI0921 10:27:04.678107 521 log.go:181] (0x29040e0) (0x2904150) Stream removed, broadcasting: 1\nI0921 10:27:04.679864 521 log.go:181] (0x29040e0) Go away received\nI0921 10:27:04.683005 521 log.go:181] (0x29040e0) (0x2904150) Stream removed, broadcasting: 1\nI0921 10:27:04.683246 521 log.go:181] (0x29040e0) (0x29da070) Stream removed, broadcasting: 3\nI0921 10:27:04.683449 521 log.go:181] (0x29040e0) 
(0x2904380) Stream removed, broadcasting: 5\n" Sep 21 10:27:04.691: INFO: stdout: "" Sep 21 10:27:04.697: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-3718 execpod-affinityj6n8p -- /bin/sh -x -c nc -zv -t -w 2 10.98.53.209 80' Sep 21 10:27:06.203: INFO: stderr: "I0921 10:27:06.065876 541 log.go:181] (0x24ea000) (0x24ea070) Create stream\nI0921 10:27:06.067734 541 log.go:181] (0x24ea000) (0x24ea070) Stream added, broadcasting: 1\nI0921 10:27:06.086059 541 log.go:181] (0x24ea000) Reply frame received for 1\nI0921 10:27:06.086577 541 log.go:181] (0x24ea000) (0x30ba150) Create stream\nI0921 10:27:06.086650 541 log.go:181] (0x24ea000) (0x30ba150) Stream added, broadcasting: 3\nI0921 10:27:06.087840 541 log.go:181] (0x24ea000) Reply frame received for 3\nI0921 10:27:06.088092 541 log.go:181] (0x24ea000) (0x30ba310) Create stream\nI0921 10:27:06.088156 541 log.go:181] (0x24ea000) (0x30ba310) Stream added, broadcasting: 5\nI0921 10:27:06.089000 541 log.go:181] (0x24ea000) Reply frame received for 5\nI0921 10:27:06.186951 541 log.go:181] (0x24ea000) Data frame received for 3\nI0921 10:27:06.187303 541 log.go:181] (0x30ba150) (3) Data frame handling\nI0921 10:27:06.187460 541 log.go:181] (0x24ea000) Data frame received for 5\nI0921 10:27:06.187684 541 log.go:181] (0x30ba310) (5) Data frame handling\nI0921 10:27:06.187858 541 log.go:181] (0x24ea000) Data frame received for 1\nI0921 10:27:06.188011 541 log.go:181] (0x24ea070) (1) Data frame handling\nI0921 10:27:06.189652 541 log.go:181] (0x24ea070) (1) Data frame sent\n+ nc -zv -t -w 2 10.98.53.209 80\nConnection to 10.98.53.209 80 port [tcp/http] succeeded!\nI0921 10:27:06.190729 541 log.go:181] (0x30ba310) (5) Data frame sent\nI0921 10:27:06.190930 541 log.go:181] (0x24ea000) Data frame received for 5\nI0921 10:27:06.191101 541 log.go:181] (0x30ba310) (5) Data frame handling\nI0921 10:27:06.192301 541 log.go:181] (0x24ea000) (0x24ea070) 
Stream removed, broadcasting: 1\nI0921 10:27:06.194004 541 log.go:181] (0x24ea000) Go away received\nI0921 10:27:06.197077 541 log.go:181] (0x24ea000) (0x24ea070) Stream removed, broadcasting: 1\nI0921 10:27:06.197343 541 log.go:181] (0x24ea000) (0x30ba150) Stream removed, broadcasting: 3\nI0921 10:27:06.197532 541 log.go:181] (0x24ea000) (0x30ba310) Stream removed, broadcasting: 5\n" Sep 21 10:27:06.205: INFO: stdout: "" Sep 21 10:27:06.219: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-3718 execpod-affinityj6n8p -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.98.53.209:80/ ; done' Sep 21 10:27:07.768: INFO: stderr: "I0921 10:27:07.575746 561 log.go:181] (0x2eadc00) (0x2eadc70) Create stream\nI0921 10:27:07.580011 561 log.go:181] (0x2eadc00) (0x2eadc70) Stream added, broadcasting: 1\nI0921 10:27:07.589971 561 log.go:181] (0x2eadc00) Reply frame received for 1\nI0921 10:27:07.590968 561 log.go:181] (0x2eadc00) (0x247c930) Create stream\nI0921 10:27:07.591096 561 log.go:181] (0x2eadc00) (0x247c930) Stream added, broadcasting: 3\nI0921 10:27:07.593365 561 log.go:181] (0x2eadc00) Reply frame received for 3\nI0921 10:27:07.593616 561 log.go:181] (0x2eadc00) (0x2eade30) Create stream\nI0921 10:27:07.593684 561 log.go:181] (0x2eadc00) (0x2eade30) Stream added, broadcasting: 5\nI0921 10:27:07.595195 561 log.go:181] (0x2eadc00) Reply frame received for 5\nI0921 10:27:07.666132 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.666370 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.666561 561 log.go:181] (0x2eadc00) Data frame received for 5\nI0921 10:27:07.666673 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.666834 561 log.go:181] (0x2eade30) (5) Data frame handling\nI0921 10:27:07.666985 561 log.go:181] (0x2eade30) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.98.53.209:80/\nI0921 10:27:07.669389 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.669515 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.669604 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.669816 561 log.go:181] (0x2eadc00) Data frame received for 5\nI0921 10:27:07.669921 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.670087 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.670222 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.670334 561 log.go:181] (0x2eade30) (5) Data frame handling\nI0921 10:27:07.670445 561 log.go:181] (0x2eade30) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0921 10:27:07.670546 561 log.go:181] (0x2eadc00) Data frame received for 5\nI0921 10:27:07.670634 561 log.go:181] (0x2eade30) (5) Data frame handling\nI0921 10:27:07.670760 561 log.go:181] (0x2eade30) (5) Data frame sent\n http://10.98.53.209:80/\nI0921 10:27:07.674233 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.674325 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.674416 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.674972 561 log.go:181] (0x2eadc00) Data frame received for 5\nI0921 10:27:07.675042 561 log.go:181] (0x2eade30) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeoutI0921 10:27:07.675147 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.675253 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.675339 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.675446 561 log.go:181] (0x2eade30) (5) Data frame sent\nI0921 10:27:07.675538 561 log.go:181] (0x2eadc00) Data frame received for 5\nI0921 10:27:07.675600 561 log.go:181] (0x2eade30) (5) Data frame handling\nI0921 10:27:07.675694 561 log.go:181] (0x2eade30) (5) Data frame sent\n 2 http://10.98.53.209:80/\nI0921 10:27:07.679385 561 log.go:181] (0x2eadc00) Data 
frame received for 3\nI0921 10:27:07.679476 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.679588 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.680434 561 log.go:181] (0x2eadc00) Data frame received for 5\nI0921 10:27:07.680546 561 log.go:181] (0x2eade30) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 10:27:07.680639 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.680746 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.680836 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.680915 561 log.go:181] (0x2eade30) (5) Data frame sent\nI0921 10:27:07.684639 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.684755 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.684891 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.685223 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.685327 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.685396 561 log.go:181] (0x2eadc00) Data frame received for 5\nI0921 10:27:07.685485 561 log.go:181] (0x2eade30) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 10:27:07.685567 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.685676 561 log.go:181] (0x2eade30) (5) Data frame sent\nI0921 10:27:07.689930 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.690047 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.690179 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.690600 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.690680 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.690767 561 log.go:181] (0x2eadc00) Data frame received for 5\nI0921 10:27:07.690883 561 log.go:181] (0x2eade30) (5) Data frame handling\n+ echo\nI0921 10:27:07.690998 561 log.go:181] (0x247c930) 
(3) Data frame sent\nI0921 10:27:07.691157 561 log.go:181] (0x2eade30) (5) Data frame sent\nI0921 10:27:07.691295 561 log.go:181] (0x2eadc00) Data frame received for 5\nI0921 10:27:07.691416 561 log.go:181] (0x2eade30) (5) Data frame handling\nI0921 10:27:07.691552 561 log.go:181] (0x2eade30) (5) Data frame sent\n+ curl -q -s --connect-timeoutI0921 10:27:07.691670 561 log.go:181] (0x2eadc00) Data frame received for 5\nI0921 10:27:07.691804 561 log.go:181] (0x2eade30) (5) Data frame handling\nI0921 10:27:07.691939 561 log.go:181] (0x2eade30) (5) Data frame sent\n 2 http://10.98.53.209:80/\nI0921 10:27:07.696006 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.696112 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.696344 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.696628 561 log.go:181] (0x2eadc00) Data frame received for 5\nI0921 10:27:07.696718 561 log.go:181] (0x2eade30) (5) Data frame handling\nI0921 10:27:07.696840 561 log.go:181] (0x2eade30) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 10:27:07.696984 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.697106 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.697195 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.699757 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.699867 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.699978 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.700080 561 log.go:181] (0x2eadc00) Data frame received for 5\nI0921 10:27:07.700222 561 log.go:181] (0x2eade30) (5) Data frame handling\nI0921 10:27:07.700303 561 log.go:181] (0x2eade30) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 10:27:07.700645 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.700739 561 log.go:181] (0x247c930) (3) Data frame 
handling\nI0921 10:27:07.700870 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.705161 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.705274 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.705362 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.705879 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.706000 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.706111 561 log.go:181] (0x2eadc00) Data frame received for 5\nI0921 10:27:07.706263 561 log.go:181] (0x2eade30) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 10:27:07.706354 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.706469 561 log.go:181] (0x2eade30) (5) Data frame sent\nI0921 10:27:07.712011 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.712097 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.712251 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.713187 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.713363 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.713487 561 log.go:181] (0x2eadc00) Data frame received for 5\nI0921 10:27:07.713597 561 log.go:181] (0x2eade30) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 10:27:07.713710 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.713859 561 log.go:181] (0x2eade30) (5) Data frame sent\nI0921 10:27:07.718149 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.718242 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.718348 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.719290 561 log.go:181] (0x2eadc00) Data frame received for 5\nI0921 10:27:07.719404 561 log.go:181] (0x2eade30) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 
10:27:07.719521 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.719637 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.719710 561 log.go:181] (0x2eade30) (5) Data frame sent\nI0921 10:27:07.719810 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.722557 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.722653 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.722791 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.722917 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.723000 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.723068 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.723130 561 log.go:181] (0x2eadc00) Data frame received for 5\nI0921 10:27:07.723184 561 log.go:181] (0x2eade30) (5) Data frame handling\nI0921 10:27:07.723254 561 log.go:181] (0x2eade30) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 10:27:07.727683 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.727788 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.727901 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.728572 561 log.go:181] (0x2eadc00) Data frame received for 5\nI0921 10:27:07.728663 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.728822 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.728898 561 log.go:181] (0x2eade30) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 10:27:07.729007 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.729122 561 log.go:181] (0x2eade30) (5) Data frame sent\nI0921 10:27:07.733152 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.733243 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.733332 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 
10:27:07.734065 561 log.go:181] (0x2eadc00) Data frame received for 5\nI0921 10:27:07.734196 561 log.go:181] (0x2eade30) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 10:27:07.734301 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.734467 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.734615 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.734741 561 log.go:181] (0x2eade30) (5) Data frame sent\nI0921 10:27:07.739519 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.739641 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.739787 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.739921 561 log.go:181] (0x2eadc00) Data frame received for 5\nI0921 10:27:07.740046 561 log.go:181] (0x2eade30) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 10:27:07.740374 561 log.go:181] (0x2eade30) (5) Data frame sent\nI0921 10:27:07.740680 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.740780 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.740898 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.745888 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.746007 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.746131 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.746542 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.746615 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.746683 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.746782 561 log.go:181] (0x2eadc00) Data frame received for 5\nI0921 10:27:07.746941 561 log.go:181] (0x2eade30) (5) Data frame handling\nI0921 10:27:07.747142 561 log.go:181] (0x2eade30) (5) Data frame sent\nI0921 10:27:07.747447 561 log.go:181] (0x2eadc00) Data frame received for 5\nI0921 
10:27:07.747530 561 log.go:181] (0x2eade30) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 10:27:07.748060 561 log.go:181] (0x2eade30) (5) Data frame sent\nI0921 10:27:07.751167 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.751263 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.751375 561 log.go:181] (0x247c930) (3) Data frame sent\nI0921 10:27:07.751698 561 log.go:181] (0x2eadc00) Data frame received for 3\nI0921 10:27:07.751776 561 log.go:181] (0x2eadc00) Data frame received for 5\nI0921 10:27:07.751916 561 log.go:181] (0x2eade30) (5) Data frame handling\nI0921 10:27:07.752038 561 log.go:181] (0x247c930) (3) Data frame handling\nI0921 10:27:07.753812 561 log.go:181] (0x2eadc00) Data frame received for 1\nI0921 10:27:07.753946 561 log.go:181] (0x2eadc70) (1) Data frame handling\nI0921 10:27:07.754079 561 log.go:181] (0x2eadc70) (1) Data frame sent\nI0921 10:27:07.754808 561 log.go:181] (0x2eadc00) (0x2eadc70) Stream removed, broadcasting: 1\nI0921 10:27:07.756697 561 log.go:181] (0x2eadc00) Go away received\nI0921 10:27:07.759581 561 log.go:181] (0x2eadc00) (0x2eadc70) Stream removed, broadcasting: 1\nI0921 10:27:07.759862 561 log.go:181] (0x2eadc00) (0x247c930) Stream removed, broadcasting: 3\nI0921 10:27:07.760042 561 log.go:181] (0x2eadc00) (0x2eade30) Stream removed, broadcasting: 5\n" Sep 21 10:27:07.773: INFO: stdout: 
"\naffinity-clusterip-transition-gbwg6\naffinity-clusterip-transition-hwlxz\naffinity-clusterip-transition-kkcsv\naffinity-clusterip-transition-hwlxz\naffinity-clusterip-transition-hwlxz\naffinity-clusterip-transition-gbwg6\naffinity-clusterip-transition-kkcsv\naffinity-clusterip-transition-gbwg6\naffinity-clusterip-transition-kkcsv\naffinity-clusterip-transition-hwlxz\naffinity-clusterip-transition-kkcsv\naffinity-clusterip-transition-kkcsv\naffinity-clusterip-transition-gbwg6\naffinity-clusterip-transition-kkcsv\naffinity-clusterip-transition-kkcsv\naffinity-clusterip-transition-kkcsv"
Sep 21 10:27:07.774: INFO: Received response from host: affinity-clusterip-transition-gbwg6
Sep 21 10:27:07.774: INFO: Received response from host: affinity-clusterip-transition-hwlxz
Sep 21 10:27:07.774: INFO: Received response from host: affinity-clusterip-transition-kkcsv
Sep 21 10:27:07.774: INFO: Received response from host: affinity-clusterip-transition-hwlxz
Sep 21 10:27:07.774: INFO: Received response from host: affinity-clusterip-transition-hwlxz
Sep 21 10:27:07.774: INFO: Received response from host: affinity-clusterip-transition-gbwg6
Sep 21 10:27:07.774: INFO: Received response from host: affinity-clusterip-transition-kkcsv
Sep 21 10:27:07.774: INFO: Received response from host: affinity-clusterip-transition-gbwg6
Sep 21 10:27:07.774: INFO: Received response from host: affinity-clusterip-transition-kkcsv
Sep 21 10:27:07.774: INFO: Received response from host: affinity-clusterip-transition-hwlxz
Sep 21 10:27:07.774: INFO: Received response from host: affinity-clusterip-transition-kkcsv
Sep 21 10:27:07.774: INFO: Received response from host: affinity-clusterip-transition-kkcsv
Sep 21 10:27:07.774: INFO: Received response from host: affinity-clusterip-transition-gbwg6
Sep 21 10:27:07.774: INFO: Received response from host: affinity-clusterip-transition-kkcsv
Sep 21 10:27:07.774: INFO: Received response from host: affinity-clusterip-transition-kkcsv
Sep 21 10:27:07.774:
INFO: Received response from host: affinity-clusterip-transition-kkcsv Sep 21 10:27:07.791: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-3718 execpod-affinityj6n8p -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.98.53.209:80/ ; done' Sep 21 10:27:09.374: INFO: stderr: "I0921 10:27:09.167286 581 log.go:181] (0x274c8c0) (0x274ddc0) Create stream\nI0921 10:27:09.169660 581 log.go:181] (0x274c8c0) (0x274ddc0) Stream added, broadcasting: 1\nI0921 10:27:09.191004 581 log.go:181] (0x274c8c0) Reply frame received for 1\nI0921 10:27:09.191484 581 log.go:181] (0x274c8c0) (0x28c6310) Create stream\nI0921 10:27:09.191554 581 log.go:181] (0x274c8c0) (0x28c6310) Stream added, broadcasting: 3\nI0921 10:27:09.192913 581 log.go:181] (0x274c8c0) Reply frame received for 3\nI0921 10:27:09.193193 581 log.go:181] (0x274c8c0) (0x27322a0) Create stream\nI0921 10:27:09.193264 581 log.go:181] (0x274c8c0) (0x27322a0) Stream added, broadcasting: 5\nI0921 10:27:09.194447 581 log.go:181] (0x274c8c0) Reply frame received for 5\nI0921 10:27:09.272292 581 log.go:181] (0x274c8c0) Data frame received for 5\nI0921 10:27:09.272773 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.273048 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.273335 581 log.go:181] (0x27322a0) (5) Data frame handling\nI0921 10:27:09.274128 581 log.go:181] (0x27322a0) (5) Data frame sent\nI0921 10:27:09.274524 581 log.go:181] (0x28c6310) (3) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 10:27:09.278508 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.278618 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.278723 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.278991 581 log.go:181] (0x274c8c0) Data frame received for 5\nI0921 10:27:09.279088 581 
log.go:181] (0x27322a0) (5) Data frame handling\nI0921 10:27:09.279175 581 log.go:181] (0x27322a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 10:27:09.279279 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.279413 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.279517 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.282376 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.282486 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.282613 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.282775 581 log.go:181] (0x274c8c0) Data frame received for 5\nI0921 10:27:09.282879 581 log.go:181] (0x27322a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 10:27:09.283010 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.283286 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.283558 581 log.go:181] (0x27322a0) (5) Data frame sent\nI0921 10:27:09.283706 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.289827 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.289896 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.289975 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.290741 581 log.go:181] (0x274c8c0) Data frame received for 5\nI0921 10:27:09.290998 581 log.go:181] (0x27322a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 10:27:09.291148 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.291330 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.291471 581 log.go:181] (0x27322a0) (5) Data frame sent\nI0921 10:27:09.291660 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.295501 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.295603 581 log.go:181] 
(0x28c6310) (3) Data frame handling\nI0921 10:27:09.295740 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.296040 581 log.go:181] (0x274c8c0) Data frame received for 5\nI0921 10:27:09.296239 581 log.go:181] (0x27322a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 10:27:09.296375 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.296538 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.296694 581 log.go:181] (0x27322a0) (5) Data frame sent\nI0921 10:27:09.296916 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.300462 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.300585 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.300799 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.301653 581 log.go:181] (0x274c8c0) Data frame received for 5\nI0921 10:27:09.301750 581 log.go:181] (0x27322a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 10:27:09.301851 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.301995 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.302095 581 log.go:181] (0x27322a0) (5) Data frame sent\nI0921 10:27:09.302221 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.305960 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.306100 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.306265 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.306753 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.306862 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.306939 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.307024 581 log.go:181] (0x274c8c0) Data frame received for 5\nI0921 10:27:09.307148 581 log.go:181] (0x27322a0) (5) Data frame handling\nI0921 10:27:09.307255 581 log.go:181] 
(0x27322a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 10:27:09.313279 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.313440 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.313597 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.313738 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.313888 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.314005 581 log.go:181] (0x274c8c0) Data frame received for 5\nI0921 10:27:09.314116 581 log.go:181] (0x27322a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 10:27:09.314226 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.314361 581 log.go:181] (0x27322a0) (5) Data frame sent\nI0921 10:27:09.317150 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.317300 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.317411 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.317668 581 log.go:181] (0x274c8c0) Data frame received for 5\nI0921 10:27:09.317779 581 log.go:181] (0x27322a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 10:27:09.317896 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.318036 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.318123 581 log.go:181] (0x27322a0) (5) Data frame sent\nI0921 10:27:09.318225 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.322349 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.322519 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.322688 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.323121 581 log.go:181] (0x274c8c0) Data frame received for 5\nI0921 10:27:09.323248 581 log.go:181] (0x27322a0) (5) Data frame handling\nI0921 10:27:09.323355 581 log.go:181] (0x27322a0) (5) 
Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 10:27:09.323460 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.323547 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.323656 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.327406 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.327562 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.327753 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.328082 581 log.go:181] (0x274c8c0) Data frame received for 5\nI0921 10:27:09.328336 581 log.go:181] (0x27322a0) (5) Data frame handling\n+ echo\nI0921 10:27:09.328512 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.328641 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.328762 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.328893 581 log.go:181] (0x27322a0) (5) Data frame sent\nI0921 10:27:09.329139 581 log.go:181] (0x274c8c0) Data frame received for 5\nI0921 10:27:09.329245 581 log.go:181] (0x27322a0) (5) Data frame handling\nI0921 10:27:09.329361 581 log.go:181] (0x27322a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 10:27:09.331638 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.331715 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.331799 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.333051 581 log.go:181] (0x274c8c0) Data frame received for 5\nI0921 10:27:09.333182 581 log.go:181] (0x27322a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2I0921 10:27:09.333290 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.333417 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.333517 581 log.go:181] (0x27322a0) (5) Data frame sent\nI0921 10:27:09.333609 581 log.go:181] (0x274c8c0) Data frame received for 5\nI0921 
10:27:09.333689 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.333804 581 log.go:181] (0x27322a0) (5) Data frame handling\nI0921 10:27:09.333921 581 log.go:181] (0x27322a0) (5) Data frame sent\n http://10.98.53.209:80/\nI0921 10:27:09.335798 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.335865 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.335944 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.336546 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.336652 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.336729 581 log.go:181] (0x274c8c0) Data frame received for 5\nI0921 10:27:09.336813 581 log.go:181] (0x27322a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 10:27:09.336898 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.337015 581 log.go:181] (0x27322a0) (5) Data frame sent\nI0921 10:27:09.339583 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.339657 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.339736 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.340432 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.340575 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.340639 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.340699 581 log.go:181] (0x274c8c0) Data frame received for 5\nI0921 10:27:09.340752 581 log.go:181] (0x27322a0) (5) Data frame handling\nI0921 10:27:09.340822 581 log.go:181] (0x27322a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 10:27:09.344402 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.344481 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.344578 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.345405 581 log.go:181] (0x274c8c0) Data frame received 
for 5\nI0921 10:27:09.345564 581 log.go:181] (0x27322a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 10:27:09.345687 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.345773 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.345854 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.345932 581 log.go:181] (0x27322a0) (5) Data frame sent\nI0921 10:27:09.349691 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.349768 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.349853 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.350525 581 log.go:181] (0x274c8c0) Data frame received for 5\nI0921 10:27:09.350593 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.350670 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.350743 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.350811 581 log.go:181] (0x27322a0) (5) Data frame handling\nI0921 10:27:09.350892 581 log.go:181] (0x27322a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.53.209:80/\nI0921 10:27:09.354733 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.354807 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.354881 581 log.go:181] (0x28c6310) (3) Data frame sent\nI0921 10:27:09.355581 581 log.go:181] (0x274c8c0) Data frame received for 3\nI0921 10:27:09.355684 581 log.go:181] (0x28c6310) (3) Data frame handling\nI0921 10:27:09.356304 581 log.go:181] (0x274c8c0) Data frame received for 5\nI0921 10:27:09.356410 581 log.go:181] (0x27322a0) (5) Data frame handling\nI0921 10:27:09.357553 581 log.go:181] (0x274c8c0) Data frame received for 1\nI0921 10:27:09.357692 581 log.go:181] (0x274ddc0) (1) Data frame handling\nI0921 10:27:09.357784 581 log.go:181] (0x274ddc0) (1) Data frame sent\nI0921 10:27:09.359869 581 log.go:181] (0x274c8c0) (0x274ddc0) Stream 
removed, broadcasting: 1\nI0921 10:27:09.360312 581 log.go:181] (0x274c8c0) Go away received\nI0921 10:27:09.363624 581 log.go:181] (0x274c8c0) (0x274ddc0) Stream removed, broadcasting: 1\nI0921 10:27:09.363926 581 log.go:181] (0x274c8c0) (0x28c6310) Stream removed, broadcasting: 3\nI0921 10:27:09.364260 581 log.go:181] (0x274c8c0) (0x27322a0) Stream removed, broadcasting: 5\n"
Sep 21 10:27:09.380: INFO: stdout: "\naffinity-clusterip-transition-hwlxz\naffinity-clusterip-transition-hwlxz\naffinity-clusterip-transition-hwlxz\naffinity-clusterip-transition-hwlxz\naffinity-clusterip-transition-hwlxz\naffinity-clusterip-transition-hwlxz\naffinity-clusterip-transition-hwlxz\naffinity-clusterip-transition-hwlxz\naffinity-clusterip-transition-hwlxz\naffinity-clusterip-transition-hwlxz\naffinity-clusterip-transition-hwlxz\naffinity-clusterip-transition-hwlxz\naffinity-clusterip-transition-hwlxz\naffinity-clusterip-transition-hwlxz\naffinity-clusterip-transition-hwlxz\naffinity-clusterip-transition-hwlxz"
Sep 21 10:27:09.381: INFO: Received response from host: affinity-clusterip-transition-hwlxz
Sep 21 10:27:09.381: INFO: Received response from host: affinity-clusterip-transition-hwlxz
Sep 21 10:27:09.381: INFO: Received response from host: affinity-clusterip-transition-hwlxz
Sep 21 10:27:09.381: INFO: Received response from host: affinity-clusterip-transition-hwlxz
Sep 21 10:27:09.381: INFO: Received response from host: affinity-clusterip-transition-hwlxz
Sep 21 10:27:09.381: INFO: Received response from host: affinity-clusterip-transition-hwlxz
Sep 21 10:27:09.381: INFO: Received response from host: affinity-clusterip-transition-hwlxz
Sep 21 10:27:09.381: INFO: Received response from host: affinity-clusterip-transition-hwlxz
Sep 21 10:27:09.381: INFO: Received response from host: affinity-clusterip-transition-hwlxz
Sep 21 10:27:09.381: INFO: Received response from host: affinity-clusterip-transition-hwlxz
Sep 21 10:27:09.381: INFO: Received response from host: affinity-clusterip-transition-hwlxz
Sep 21 10:27:09.381: INFO: Received response from host: affinity-clusterip-transition-hwlxz
Sep 21 10:27:09.381: INFO: Received response from host: affinity-clusterip-transition-hwlxz
Sep 21 10:27:09.381: INFO: Received response from host: affinity-clusterip-transition-hwlxz
Sep 21 10:27:09.381: INFO: Received response from host: affinity-clusterip-transition-hwlxz
Sep 21 10:27:09.381: INFO: Received response from host: affinity-clusterip-transition-hwlxz
Sep 21 10:27:09.382: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-3718, will wait for the garbage collector to delete the pods
Sep 21 10:27:10.138: INFO: Deleting ReplicationController affinity-clusterip-transition took: 153.137855ms
Sep 21 10:27:10.439: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 300.797795ms
[AfterEach] [sig-network] Services
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:27:23.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3718" for this suite.
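The pass condition in the affinity test above is that every one of the sixteen curls against the ClusterIP returns the same backend pod name. A standalone sketch of that check, operating on a captured response list rather than a live service (the list literal below stands in for the real curl loop; it is illustrative, not the framework's own code):

```shell
# The affinity check: every response to the repeated curls against the
# ClusterIP must name the same backend pod. `responses` is a captured
# sample standing in for the live curl loop shown in the log above.
responses="affinity-clusterip-transition-hwlxz
affinity-clusterip-transition-hwlxz
affinity-clusterip-transition-hwlxz"

# Affinity held iff the de-duplicated host list has exactly one entry.
unique=$(printf '%s\n' "$responses" | sort -u | wc -l | tr -d ' ')
if [ "$unique" -eq 1 ]; then
  echo "session affinity held"
else
  echo "session affinity broken: $unique distinct hosts"
fi
```

With sessionAffinity switched back to None (the "transition" half of the spec), the same check would be expected to report several distinct hosts instead.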
[AfterEach] [sig-network] Services
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:31.956 seconds]
[sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":31,"skipped":412,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:27:23.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 21 10:27:23.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Sep 21 10:27:34.275: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-756 create -f -'
Sep 21 10:27:39.939: INFO: stderr: ""
Sep 21 10:27:39.940: INFO: stdout: "e2e-test-crd-publish-openapi-2621-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Sep 21 10:27:39.940: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-756 delete e2e-test-crd-publish-openapi-2621-crds test-cr'
Sep 21 10:27:41.181: INFO: stderr: ""
Sep 21 10:27:41.181: INFO: stdout: "e2e-test-crd-publish-openapi-2621-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Sep 21 10:27:41.182: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-756 apply -f -'
Sep 21 10:27:44.019: INFO: stderr: ""
Sep 21 10:27:44.019: INFO: stdout: "e2e-test-crd-publish-openapi-2621-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Sep 21 10:27:44.020: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-756 delete e2e-test-crd-publish-openapi-2621-crds test-cr'
Sep 21 10:27:45.301: INFO: stderr: ""
Sep 21 10:27:45.302: INFO: stdout: "e2e-test-crd-publish-openapi-2621-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Sep 21 10:27:45.302: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2621-crds'
Sep 21 10:27:48.106: INFO: stderr: ""
Sep 21 10:27:48.106: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2621-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:27:58.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-756" for this suite.
• [SLOW TEST:35.494 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":303,"completed":32,"skipped":413,"failed":0}
SSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:27:58.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
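Each completed spec emits a machine-readable `{"msg":...}` progress line like the PASSED entries above. A small sketch for pulling the running counters out of such a line with `sed` (the `line` variable is a sample copied from this log, standing in for a live log stream):

```shell
# Extract the completed/total counters from a per-spec JSON progress
# line, e.g. {"msg":"PASSED ...","total":303,"completed":32,...}.
line='{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":303,"completed":32,"skipped":413,"failed":0}'

completed=$(printf '%s' "$line" | sed -n 's/.*"completed":\([0-9]*\).*/\1/p')
total=$(printf '%s' "$line" | sed -n 's/.*"total":\([0-9]*\).*/\1/p')
echo "progress: $completed/$total specs"
```

Piping a saved run through the same `sed` expressions (one line at a time) gives a quick progress tally without a JSON parser; a real JSON tool would be more robust if the message format ever changes.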
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Sep 21 10:27:59.006: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:27:59.058: INFO: Number of nodes with available pods: 0
Sep 21 10:27:59.059: INFO: Node kali-worker is running more than one daemon pod
Sep 21 10:28:00.069: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:28:00.076: INFO: Number of nodes with available pods: 0
Sep 21 10:28:00.076: INFO: Node kali-worker is running more than one daemon pod
Sep 21 10:28:01.070: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:28:01.077: INFO: Number of nodes with available pods: 0
Sep 21 10:28:01.077: INFO: Node kali-worker is running more than one daemon pod
Sep 21 10:28:02.125: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:28:02.131: INFO: Number of nodes with available pods: 0
Sep 21 10:28:02.131: INFO: Node kali-worker is running more than one daemon pod
Sep 21 10:28:03.073: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:28:03.079: INFO: Number of nodes with available pods: 0
Sep 21 10:28:03.079: INFO: Node kali-worker is running more than one daemon pod
Sep 21 10:28:04.072: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:28:04.079: INFO: Number of nodes with available pods: 1
Sep 21 10:28:04.079: INFO: Node kali-worker is running more than one daemon pod
Sep 21 10:28:05.070: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:28:05.076: INFO: Number of nodes with available pods: 2
Sep 21 10:28:05.077: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Sep 21 10:28:05.116: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:28:05.145: INFO: Number of nodes with available pods: 1
Sep 21 10:28:05.145: INFO: Node kali-worker is running more than one daemon pod
Sep 21 10:28:06.158: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:28:06.170: INFO: Number of nodes with available pods: 1
Sep 21 10:28:06.170: INFO: Node kali-worker is running more than one daemon pod
Sep 21 10:28:07.157: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:28:07.164: INFO: Number of nodes with available pods: 1
Sep 21 10:28:07.164: INFO: Node kali-worker is running more than one daemon pod
Sep 21 10:28:08.156: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:28:08.164: INFO: Number of nodes with available pods: 1
Sep 21 10:28:08.164: INFO: Node kali-worker is running more than one daemon pod
Sep 21 10:28:09.159: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:28:09.166: INFO: Number of nodes with available pods: 1
Sep 21 10:28:09.166: INFO: Node kali-worker is running more than one daemon pod
Sep 21 10:28:10.158: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:28:10.163: INFO: Number of nodes with available pods: 1
Sep 21 10:28:10.164: INFO: Node kali-worker is running more than one daemon pod
Sep 21 10:28:11.159: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:28:11.167: INFO: Number of nodes with available pods: 1
Sep 21 10:28:11.167: INFO: Node kali-worker is running more than one daemon pod
Sep 21 10:28:12.155: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:28:12.162: INFO: Number of nodes with available pods: 1
Sep 21 10:28:12.162: INFO: Node kali-worker is running more than one daemon pod
Sep 21 10:28:13.188: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:28:13.214: INFO: Number of nodes with available pods: 1
Sep 21 10:28:13.214: INFO: Node kali-worker is running more than one daemon pod
Sep 21 10:28:14.203: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:28:14.261: INFO: Number of nodes with available pods: 1
Sep 21 10:28:14.261: INFO: Node kali-worker is running more than one daemon pod
Sep 21 10:28:15.154: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:28:15.159: INFO: Number of nodes with available pods: 1
Sep 21 10:28:15.159: INFO: Node kali-worker is running more than one daemon pod
Sep 21 10:28:16.159: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:28:16.166: INFO: Number of nodes with available pods: 1
Sep 21 10:28:16.166: INFO: Node kali-worker is running more than one daemon pod
Sep 21 10:28:17.158: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:28:17.165: INFO: Number of nodes with available pods: 1
Sep 21 10:28:17.166: INFO: Node kali-worker is running more than one daemon pod
Sep 21 10:28:18.158: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:28:18.165: INFO: Number of nodes with available pods: 2
Sep 21 10:28:18.165: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9314, will wait for the garbage collector to delete the pods
Sep 21 10:28:18.236: INFO: Deleting DaemonSet.extensions daemon-set took: 9.084482ms
Sep 21 10:28:18.737: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.925982ms
Sep 21 10:28:33.253: INFO: Number of nodes with available pods: 0
Sep 21 10:28:33.253: INFO: Number of running nodes: 0, number of available pods: 0
Sep 21 10:28:33.276: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9314/daemonsets","resourceVersion":"2048865"},"items":null}
Sep 21 10:28:33.282: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9314/pods","resourceVersion":"2048865"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:28:33.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9314" for this suite.
• [SLOW TEST:34.442 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":303,"completed":33,"skipped":417,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:28:33.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 21 10:28:33.540: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a9fd59b9-6ce6-4ba0-9bd8-46cf5f1b5159" in namespace "downward-api-5196" to be "Succeeded or Failed"
Sep 21 10:28:33.553: INFO: Pod "downwardapi-volume-a9fd59b9-6ce6-4ba0-9bd8-46cf5f1b5159": Phase="Pending", Reason="", readiness=false. Elapsed: 12.440443ms
Sep 21 10:28:35.561: INFO: Pod "downwardapi-volume-a9fd59b9-6ce6-4ba0-9bd8-46cf5f1b5159": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02040133s
Sep 21 10:28:37.568: INFO: Pod "downwardapi-volume-a9fd59b9-6ce6-4ba0-9bd8-46cf5f1b5159": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027524593s
STEP: Saw pod success
Sep 21 10:28:37.568: INFO: Pod "downwardapi-volume-a9fd59b9-6ce6-4ba0-9bd8-46cf5f1b5159" satisfied condition "Succeeded or Failed"
Sep 21 10:28:37.595: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-a9fd59b9-6ce6-4ba0-9bd8-46cf5f1b5159 container client-container:
STEP: delete the pod
Sep 21 10:28:37.647: INFO: Waiting for pod downwardapi-volume-a9fd59b9-6ce6-4ba0-9bd8-46cf5f1b5159 to disappear
Sep 21 10:28:37.660: INFO: Pod downwardapi-volume-a9fd59b9-6ce6-4ba0-9bd8-46cf5f1b5159 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:28:37.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5196" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":34,"skipped":421,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:28:37.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
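The Downward API volume spec exercised above projects a container's memory request into a file inside the pod. A hedged sketch of such a manifest (pod name, image, and file path are illustrative, not the exact spec the framework generates):

```shell
# Generate a pod manifest in the style of the Downward API volume test:
# the container's own memory request is exposed at /etc/podinfo/mem_request
# via a downwardAPI volume. Names and image are illustrative assumptions.
cat > downward-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
          divisor: 1Mi
EOF
grep -q 'resourceFieldRef' downward-pod.yaml && echo "manifest written"
# Against a live cluster this would be submitted with:
#   kubectl apply -f downward-pod.yaml
```

With `divisor: 1Mi`, the file would contain the request expressed in mebibytes; the test then asserts on the container's log output, as the "Trying to get logs" step above shows.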
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Sep 21 10:28:37.771: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:28:46.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5580" for this suite.
• [SLOW TEST:8.497 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":303,"completed":35,"skipped":453,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
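The InitContainer spec above relies on the guarantee that all `initContainers` run to completion, one at a time and in order, before any app container starts, and that on a RestartAlways pod a failing init container is retried. A minimal illustrative manifest (names and image are assumptions, not the test's own pod):

```shell
# Sketch of a RestartAlways pod with ordered init containers, in the
# style of the InitContainer test above. Both init containers must exit 0,
# in order, before the app container is started.
cat > init-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-containers-example
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox
    command: ["true"]
  - name: init2
    image: busybox
    command: ["true"]
  containers:
  - name: run1
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
EOF
grep -q 'initContainers:' init-pod.yaml && echo "manifest written"
```

On a RestartNever pod, by contrast, a failing init container fails the whole pod without starting the app container, which is what the companion RestartNever spec in this suite checks.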
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:28:46.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-467d41de-7a59-4bcf-b8b5-c6a75c7b79fe
STEP: Creating a pod to test consume configMaps
Sep 21 10:28:46.294: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fcb4574f-9048-449e-b5f8-8a30ed28d40e" in namespace "projected-3310" to be "Succeeded or Failed"
Sep 21 10:28:46.308: INFO: Pod "pod-projected-configmaps-fcb4574f-9048-449e-b5f8-8a30ed28d40e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.972268ms
Sep 21 10:28:48.316: INFO: Pod "pod-projected-configmaps-fcb4574f-9048-449e-b5f8-8a30ed28d40e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021367996s
Sep 21 10:28:50.322: INFO: Pod "pod-projected-configmaps-fcb4574f-9048-449e-b5f8-8a30ed28d40e": Phase="Running", Reason="", readiness=true. Elapsed: 4.027073207s
Sep 21 10:28:52.771: INFO: Pod "pod-projected-configmaps-fcb4574f-9048-449e-b5f8-8a30ed28d40e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.476127085s
STEP: Saw pod success
Sep 21 10:28:52.771: INFO: Pod "pod-projected-configmaps-fcb4574f-9048-449e-b5f8-8a30ed28d40e" satisfied condition "Succeeded or Failed"
Sep 21 10:28:52.777: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-fcb4574f-9048-449e-b5f8-8a30ed28d40e container projected-configmap-volume-test:
STEP: delete the pod
Sep 21 10:28:53.150: INFO: Waiting for pod pod-projected-configmaps-fcb4574f-9048-449e-b5f8-8a30ed28d40e to disappear
Sep 21 10:28:53.470: INFO: Pod pod-projected-configmaps-fcb4574f-9048-449e-b5f8-8a30ed28d40e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:28:53.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3310" for this suite.
• [SLOW TEST:7.477 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":36,"skipped":489,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:28:53.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 21 10:28:53.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Sep 21 10:29:04.516: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4403 create -f -'
Sep 21 10:29:11.055: INFO: stderr: ""
Sep 21 10:29:11.056: INFO: stdout: "e2e-test-crd-publish-openapi-3023-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Sep 21 10:29:11.056: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4403 delete e2e-test-crd-publish-openapi-3023-crds test-foo'
Sep 21 10:29:12.851: INFO: stderr: ""
Sep 21 10:29:12.851: INFO: stdout: "e2e-test-crd-publish-openapi-3023-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Sep 21 10:29:12.852: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4403 apply -f -'
Sep 21 10:29:15.330: INFO: stderr: ""
Sep 21 10:29:15.330: INFO: stdout: "e2e-test-crd-publish-openapi-3023-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Sep 21 10:29:15.331: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4403 delete e2e-test-crd-publish-openapi-3023-crds test-foo'
Sep 21 10:29:16.705: INFO: stderr: ""
Sep 21 10:29:16.705: INFO: stdout: "e2e-test-crd-publish-openapi-3023-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Sep 21 10:29:16.707: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4403 create -f -'
Sep 21 10:29:19.001: INFO: rc: 1
Sep 21 10:29:19.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4403 apply -f -'
Sep 21 10:29:20.842: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Sep 21 10:29:20.843: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4403 create -f -'
Sep 21 10:29:23.278: INFO: rc: 1
Sep 21 10:29:23.279: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4403 apply -f -'
Sep 21 10:29:25.803: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Sep 21 10:29:25.803: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3023-crds'
Sep 21 10:29:29.242: INFO: stderr: ""
Sep 21 10:29:29.242: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3023-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Sep 21 10:29:29.247: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3023-crds.metadata'
Sep 21 10:29:31.394: INFO: stderr: ""
Sep 21 10:29:31.394: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3023-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix.
The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. 
An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Sep 21 10:29:31.400: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3023-crds.spec' Sep 21 10:29:33.899: INFO: stderr: "" Sep 21 10:29:33.900: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3023-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Sep 21 10:29:33.901: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3023-crds.spec.bars' Sep 21 10:29:36.412: INFO: stderr: "" Sep 21 10:29:36.412: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3023-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Sep 21 10:29:36.413: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3023-crds.spec.bars2' Sep 21 10:29:38.775: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:29:59.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4403" for this suite. 
• [SLOW TEST:65.862 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":303,"completed":37,"skipped":509,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:29:59.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 21 
10:29:59.652: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d2dfbb1f-d9eb-406d-b780-0c621905482f" in namespace "projected-7162" to be "Succeeded or Failed" Sep 21 10:29:59.657: INFO: Pod "downwardapi-volume-d2dfbb1f-d9eb-406d-b780-0c621905482f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.186853ms Sep 21 10:30:01.970: INFO: Pod "downwardapi-volume-d2dfbb1f-d9eb-406d-b780-0c621905482f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318134376s Sep 21 10:30:03.977: INFO: Pod "downwardapi-volume-d2dfbb1f-d9eb-406d-b780-0c621905482f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.325332249s STEP: Saw pod success Sep 21 10:30:03.977: INFO: Pod "downwardapi-volume-d2dfbb1f-d9eb-406d-b780-0c621905482f" satisfied condition "Succeeded or Failed" Sep 21 10:30:03.982: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-d2dfbb1f-d9eb-406d-b780-0c621905482f container client-container: STEP: delete the pod Sep 21 10:30:04.070: INFO: Waiting for pod downwardapi-volume-d2dfbb1f-d9eb-406d-b780-0c621905482f to disappear Sep 21 10:30:04.082: INFO: Pod downwardapi-volume-d2dfbb1f-d9eb-406d-b780-0c621905482f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:30:04.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7162" for this suite. 
• [SLOW TEST:5.274 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":38,"skipped":512,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:30:04.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 21 10:30:10.937: INFO: deployment 
"sample-webhook-deployment" doesn't have the required revision set Sep 21 10:30:12.991: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281010, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281010, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281011, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281010, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 21 10:30:16.082: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 
Sep 21 10:30:16.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1493" for this suite. STEP: Destroying namespace "webhook-1493-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.724 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":303,"completed":39,"skipped":539,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:30:16.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-30abffac-61e0-4845-9959-d45e624abc32 STEP: Creating a pod to test consume secrets Sep 21 10:30:16.595: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-140d6411-1038-4b03-a9de-94ca69063b21" in namespace "projected-5906" to be "Succeeded or Failed" Sep 21 10:30:16.609: INFO: Pod "pod-projected-secrets-140d6411-1038-4b03-a9de-94ca69063b21": Phase="Pending", Reason="", readiness=false. Elapsed: 13.434389ms Sep 21 10:30:18.618: INFO: Pod "pod-projected-secrets-140d6411-1038-4b03-a9de-94ca69063b21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022481656s Sep 21 10:30:20.672: INFO: Pod "pod-projected-secrets-140d6411-1038-4b03-a9de-94ca69063b21": Phase="Running", Reason="", readiness=true. Elapsed: 4.076625674s Sep 21 10:30:22.681: INFO: Pod "pod-projected-secrets-140d6411-1038-4b03-a9de-94ca69063b21": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.085703636s STEP: Saw pod success Sep 21 10:30:22.681: INFO: Pod "pod-projected-secrets-140d6411-1038-4b03-a9de-94ca69063b21" satisfied condition "Succeeded or Failed" Sep 21 10:30:22.688: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-140d6411-1038-4b03-a9de-94ca69063b21 container projected-secret-volume-test: STEP: delete the pod Sep 21 10:30:22.714: INFO: Waiting for pod pod-projected-secrets-140d6411-1038-4b03-a9de-94ca69063b21 to disappear Sep 21 10:30:22.718: INFO: Pod pod-projected-secrets-140d6411-1038-4b03-a9de-94ca69063b21 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:30:22.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5906" for this suite. • [SLOW TEST:6.202 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":40,"skipped":560,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:30:22.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Sep 21 10:30:30.901: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 21 10:30:30.923: INFO: Pod pod-with-poststart-exec-hook still exists Sep 21 10:30:32.924: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 21 10:30:32.949: INFO: Pod pod-with-poststart-exec-hook still exists Sep 21 10:30:34.924: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 21 10:30:34.932: INFO: Pod pod-with-poststart-exec-hook still exists Sep 21 10:30:36.924: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 21 10:30:36.934: INFO: Pod pod-with-poststart-exec-hook still exists Sep 21 10:30:38.924: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 21 10:30:38.934: INFO: Pod pod-with-poststart-exec-hook still exists Sep 21 10:30:40.924: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 21 10:30:41.241: INFO: Pod pod-with-poststart-exec-hook still exists Sep 21 10:30:42.924: INFO: Waiting for pod 
pod-with-poststart-exec-hook to disappear Sep 21 10:30:42.932: INFO: Pod pod-with-poststart-exec-hook still exists Sep 21 10:30:44.924: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 21 10:30:44.931: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:30:44.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9474" for this suite. • [SLOW TEST:22.212 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":303,"completed":41,"skipped":594,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:30:44.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-b904f8cf-8fb9-4ac8-b6ed-6a223a2b077d STEP: Creating a pod to test consume configMaps Sep 21 10:30:45.047: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a44d59b7-41cc-4bb5-9bcb-fe0c36c5e78c" in namespace "projected-1947" to be "Succeeded or Failed" Sep 21 10:30:45.062: INFO: Pod "pod-projected-configmaps-a44d59b7-41cc-4bb5-9bcb-fe0c36c5e78c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.947355ms Sep 21 10:30:47.161: INFO: Pod "pod-projected-configmaps-a44d59b7-41cc-4bb5-9bcb-fe0c36c5e78c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114127686s Sep 21 10:30:49.168: INFO: Pod "pod-projected-configmaps-a44d59b7-41cc-4bb5-9bcb-fe0c36c5e78c": Phase="Running", Reason="", readiness=true. Elapsed: 4.121489183s Sep 21 10:30:51.175: INFO: Pod "pod-projected-configmaps-a44d59b7-41cc-4bb5-9bcb-fe0c36c5e78c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.128320671s STEP: Saw pod success Sep 21 10:30:51.175: INFO: Pod "pod-projected-configmaps-a44d59b7-41cc-4bb5-9bcb-fe0c36c5e78c" satisfied condition "Succeeded or Failed" Sep 21 10:30:51.180: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-a44d59b7-41cc-4bb5-9bcb-fe0c36c5e78c container projected-configmap-volume-test: STEP: delete the pod Sep 21 10:30:51.227: INFO: Waiting for pod pod-projected-configmaps-a44d59b7-41cc-4bb5-9bcb-fe0c36c5e78c to disappear Sep 21 10:30:51.240: INFO: Pod pod-projected-configmaps-a44d59b7-41cc-4bb5-9bcb-fe0c36c5e78c no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:30:51.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1947" for this suite. • [SLOW TEST:6.402 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":42,"skipped":603,"failed":0} SSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:30:51.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Sep 21 10:30:51.429: INFO: Created pod &Pod{ObjectMeta:{dns-771 dns-771 /api/v1/namespaces/dns-771/pods/dns-771 e42fe248-08e8-49aa-87b9-f7aef2f2598a 2049779 0 2020-09-21 10:30:51 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-09-21 10:30:51 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t8w9k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t8w9k,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.g
cr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t8w9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Sta
tus:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 21 10:30:51.445: INFO: The status of Pod dns-771 is Pending, waiting for it to be Running (with Ready = true) Sep 21 10:30:53.474: INFO: The status of Pod dns-771 is Pending, waiting for it to be Running (with Ready = true) Sep 21 10:30:55.454: INFO: The status of Pod dns-771 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Sep 21 10:30:55.455: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-771 PodName:dns-771 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 21 10:30:55.456: INFO: >>> kubeConfig: /root/.kube/config I0921 10:30:55.565504 10 log.go:181] (0x7d929a0) (0x7d92a10) Create stream I0921 10:30:55.566084 10 log.go:181] (0x7d929a0) (0x7d92a10) Stream added, broadcasting: 1 I0921 10:30:55.583581 10 log.go:181] (0x7d929a0) Reply frame received for 1 I0921 10:30:55.584645 10 log.go:181] (0x7d929a0) (0x7d92bd0) Create stream I0921 10:30:55.584830 10 log.go:181] (0x7d929a0) (0x7d92bd0) Stream added, broadcasting: 3 I0921 10:30:55.586851 10 log.go:181] (0x7d929a0) Reply frame received for 3 I0921 10:30:55.587070 10 log.go:181] (0x7d929a0) (0x7d92d90) Create stream I0921 10:30:55.587133 10 log.go:181] (0x7d929a0) (0x7d92d90) Stream added, broadcasting: 5 I0921 10:30:55.588511 10 log.go:181] (0x7d929a0) Reply frame received for 5 I0921 10:30:55.689495 10 log.go:181] (0x7d929a0) Data frame received for 5 I0921 10:30:55.689959 10 log.go:181] (0x7d929a0) Data frame received for 3 I0921 10:30:55.690214 10 log.go:181] (0x7d92bd0) (3) Data frame handling I0921 10:30:55.690356 10 log.go:181] (0x7d92d90) (5) Data frame handling I0921 10:30:55.691037 10 log.go:181] (0x7d92bd0) (3) 
Data frame sent I0921 10:30:55.691912 10 log.go:181] (0x7d929a0) Data frame received for 1 I0921 10:30:55.692129 10 log.go:181] (0x7d92a10) (1) Data frame handling I0921 10:30:55.692449 10 log.go:181] (0x7d92a10) (1) Data frame sent I0921 10:30:55.692666 10 log.go:181] (0x7d929a0) Data frame received for 3 I0921 10:30:55.692857 10 log.go:181] (0x7d92bd0) (3) Data frame handling I0921 10:30:55.694294 10 log.go:181] (0x7d929a0) (0x7d92a10) Stream removed, broadcasting: 1 I0921 10:30:55.696457 10 log.go:181] (0x7d929a0) Go away received I0921 10:30:55.698227 10 log.go:181] (0x7d929a0) (0x7d92a10) Stream removed, broadcasting: 1 I0921 10:30:55.698448 10 log.go:181] (0x7d929a0) (0x7d92bd0) Stream removed, broadcasting: 3 I0921 10:30:55.698634 10 log.go:181] (0x7d929a0) (0x7d92d90) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Sep 21 10:30:55.699: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-771 PodName:dns-771 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 21 10:30:55.699: INFO: >>> kubeConfig: /root/.kube/config I0921 10:30:55.804998 10 log.go:181] (0xa82c0e0) (0xa82c150) Create stream I0921 10:30:55.805206 10 log.go:181] (0xa82c0e0) (0xa82c150) Stream added, broadcasting: 1 I0921 10:30:55.810749 10 log.go:181] (0xa82c0e0) Reply frame received for 1 I0921 10:30:55.811048 10 log.go:181] (0xa82c0e0) (0xa82c310) Create stream I0921 10:30:55.811132 10 log.go:181] (0xa82c0e0) (0xa82c310) Stream added, broadcasting: 3 I0921 10:30:55.813247 10 log.go:181] (0xa82c0e0) Reply frame received for 3 I0921 10:30:55.813477 10 log.go:181] (0xa82c0e0) (0xa82c4d0) Create stream I0921 10:30:55.813593 10 log.go:181] (0xa82c0e0) (0xa82c4d0) Stream added, broadcasting: 5 I0921 10:30:55.815833 10 log.go:181] (0xa82c0e0) Reply frame received for 5 I0921 10:30:55.877113 10 log.go:181] (0xa82c0e0) Data frame received for 3 I0921 10:30:55.877402 10 log.go:181] (0xa82c310) 
(3) Data frame handling I0921 10:30:55.877534 10 log.go:181] (0xa82c310) (3) Data frame sent I0921 10:30:55.877624 10 log.go:181] (0xa82c0e0) Data frame received for 3 I0921 10:30:55.877684 10 log.go:181] (0xa82c310) (3) Data frame handling I0921 10:30:55.877848 10 log.go:181] (0xa82c0e0) Data frame received for 5 I0921 10:30:55.878090 10 log.go:181] (0xa82c4d0) (5) Data frame handling I0921 10:30:55.880456 10 log.go:181] (0xa82c0e0) Data frame received for 1 I0921 10:30:55.880546 10 log.go:181] (0xa82c150) (1) Data frame handling I0921 10:30:55.880635 10 log.go:181] (0xa82c150) (1) Data frame sent I0921 10:30:55.880767 10 log.go:181] (0xa82c0e0) (0xa82c150) Stream removed, broadcasting: 1 I0921 10:30:55.880923 10 log.go:181] (0xa82c0e0) Go away received I0921 10:30:55.882826 10 log.go:181] (0xa82c0e0) (0xa82c150) Stream removed, broadcasting: 1 I0921 10:30:55.883041 10 log.go:181] (0xa82c0e0) (0xa82c310) Stream removed, broadcasting: 3 I0921 10:30:55.883170 10 log.go:181] (0xa82c0e0) (0xa82c4d0) Stream removed, broadcasting: 5 Sep 21 10:30:55.883: INFO: Deleting pod dns-771... [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:30:55.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-771" for this suite. 
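The long PodSpec dump above boils down to a pod that opts out of cluster DNS (DNSPolicy: None) and supplies its own resolver and search suffix, which the two agnhost exec checks then verify. As a minimal standalone manifest sketch (field values are taken from the dump; this is an illustration, not the exact object the test builds):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-771              # pod name from the log
spec:
  dnsPolicy: "None"          # ignore cluster DNS entirely; use dnsConfig as given
  dnsConfig:
    nameservers:
    - 1.1.1.1                # custom resolver asserted via /agnhost dns-server-list
    searches:
    - resolv.conf.local      # custom suffix asserted via /agnhost dns-suffix
  containers:
  - name: agnhost
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20
    args: ["pause"]
```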
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":303,"completed":43,"skipped":606,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:30:55.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-33a8abc5-896f-4078-b08a-8e0c9b63dc39
STEP: Creating a pod to test consume configMaps
Sep 21 10:30:56.318: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f4e3ae4a-a16f-4e68-a449-68d8a8e31493" in namespace "projected-8749" to be "Succeeded or Failed"
Sep 21 10:30:56.373: INFO: Pod "pod-projected-configmaps-f4e3ae4a-a16f-4e68-a449-68d8a8e31493": Phase="Pending", Reason="", readiness=false. Elapsed: 53.950401ms
Sep 21 10:30:58.381: INFO: Pod "pod-projected-configmaps-f4e3ae4a-a16f-4e68-a449-68d8a8e31493": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062589856s
Sep 21 10:31:00.390: INFO: Pod "pod-projected-configmaps-f4e3ae4a-a16f-4e68-a449-68d8a8e31493": Phase="Running", Reason="", readiness=true. Elapsed: 4.071341346s
Sep 21 10:31:02.399: INFO: Pod "pod-projected-configmaps-f4e3ae4a-a16f-4e68-a449-68d8a8e31493": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.080853535s
STEP: Saw pod success
Sep 21 10:31:02.400: INFO: Pod "pod-projected-configmaps-f4e3ae4a-a16f-4e68-a449-68d8a8e31493" satisfied condition "Succeeded or Failed"
Sep 21 10:31:02.405: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-f4e3ae4a-a16f-4e68-a449-68d8a8e31493 container projected-configmap-volume-test:
STEP: delete the pod
Sep 21 10:31:02.453: INFO: Waiting for pod pod-projected-configmaps-f4e3ae4a-a16f-4e68-a449-68d8a8e31493 to disappear
Sep 21 10:31:02.482: INFO: Pod pod-projected-configmaps-f4e3ae4a-a16f-4e68-a449-68d8a8e31493 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:31:02.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8749" for this suite.
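This test mounts a ConfigMap through a projected volume, remapping a key to a new path ("mappings") and setting a per-item file mode ("Item mode set"). A sketch of that shape (the ConfigMap name is from the log; the key, path, mode 0400, and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map-33a8abc5-896f-4078-b08a-8e0c9b63dc39
          items:
          - key: data-1             # illustrative key
            path: projected/data-1  # remapped path ("mappings")
            mode: 0400              # per-item file mode ("Item mode set")
  containers:
  - name: projected-configmap-volume-test   # container name from the log
    image: busybox                          # illustrative; the test uses its own image
    command: ["sh", "-c", "stat -c '%a' /etc/projected-configmap-volume/projected/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
      readOnly: true
```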
• [SLOW TEST:6.522 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":44,"skipped":611,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:31:02.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's command
Sep 21 10:31:02.618: INFO: Waiting up to 5m0s for pod "var-expansion-44656011-858b-4432-8c0f-d3fd8c5b140a" in namespace "var-expansion-7439" to be "Succeeded or Failed"
Sep 21 10:31:02.639: INFO: Pod "var-expansion-44656011-858b-4432-8c0f-d3fd8c5b140a": Phase="Pending", Reason="", readiness=false. Elapsed: 21.095914ms
Sep 21 10:31:04.647: INFO: Pod "var-expansion-44656011-858b-4432-8c0f-d3fd8c5b140a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029437998s
Sep 21 10:31:06.655: INFO: Pod "var-expansion-44656011-858b-4432-8c0f-d3fd8c5b140a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03720775s
STEP: Saw pod success
Sep 21 10:31:06.655: INFO: Pod "var-expansion-44656011-858b-4432-8c0f-d3fd8c5b140a" satisfied condition "Succeeded or Failed"
Sep 21 10:31:06.660: INFO: Trying to get logs from node kali-worker2 pod var-expansion-44656011-858b-4432-8c0f-d3fd8c5b140a container dapi-container:
STEP: delete the pod
Sep 21 10:31:06.714: INFO: Waiting for pod var-expansion-44656011-858b-4432-8c0f-d3fd8c5b140a to disappear
Sep 21 10:31:06.745: INFO: Pod var-expansion-44656011-858b-4432-8c0f-d3fd8c5b140a no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:31:06.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7439" for this suite.
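The substitution under test is Kubernetes' own $(VAR) expansion in a container command, which the kubelet performs before the container starts, without involving a shell. A minimal sketch (the env var, value, and image are illustrative; dapi-container is the container name from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox              # illustrative image
    env:
    - name: MESSAGE             # illustrative variable
      value: "test-value"
    command: ["sh", "-c", "echo $(MESSAGE)"]   # $(MESSAGE) is expanded by Kubernetes, not the shell
```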
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":303,"completed":45,"skipped":634,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:31:06.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should add annotations for pods in rc [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating Agnhost RC
Sep 21 10:31:06.893: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3433'
Sep 21 10:31:09.383: INFO: stderr: ""
Sep 21 10:31:09.383: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Sep 21 10:31:10.392: INFO: Selector matched 1 pods for map[app:agnhost]
Sep 21 10:31:10.392: INFO: Found 0 / 1
Sep 21 10:31:11.392: INFO: Selector matched 1 pods for map[app:agnhost]
Sep 21 10:31:11.392: INFO: Found 0 / 1
Sep 21 10:31:12.393: INFO: Selector matched 1 pods for map[app:agnhost]
Sep 21 10:31:12.393: INFO: Found 0 / 1
Sep 21 10:31:13.392: INFO: Selector matched 1 pods for map[app:agnhost]
Sep 21 10:31:13.392: INFO: Found 1 / 1
Sep 21 10:31:13.393: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Sep 21 10:31:13.399: INFO: Selector matched 1 pods for map[app:agnhost]
Sep 21 10:31:13.399: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Sep 21 10:31:13.399: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config patch pod agnhost-primary-zjzg2 --namespace=kubectl-3433 -p {"metadata":{"annotations":{"x":"y"}}}'
Sep 21 10:31:14.761: INFO: stderr: ""
Sep 21 10:31:14.761: INFO: stdout: "pod/agnhost-primary-zjzg2 patched\n"
STEP: checking annotations
Sep 21 10:31:14.770: INFO: Selector matched 1 pods for map[app:agnhost]
Sep 21 10:31:14.771: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:31:14.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3433" for this suite.
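The -p payload in the kubectl command above is a merge patch over the pod's metadata; the same patch expressed as a standalone YAML fragment (a form `kubectl patch -p` also accepts):

```yaml
metadata:
  annotations:
    x: "y"
```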
• [SLOW TEST:8.027 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1490
    should add annotations for pods in rc [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":303,"completed":46,"skipped":639,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:31:14.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Sep 21 10:31:21.303: INFO: 10 pods remaining
Sep 21 10:31:21.303: INFO: 10 pods has nil DeletionTimestamp
Sep 21 10:31:21.303: INFO:
Sep 21 10:31:23.570: INFO: 0 pods remaining
Sep 21 10:31:23.570: INFO: 0 pods has nil DeletionTimestamp
Sep 21 10:31:23.571: INFO:
STEP: Gathering metrics
W0921 10:31:24.545238 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Sep 21 10:32:26.575: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:32:26.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2460" for this suite.
• [SLOW TEST:71.801 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":303,"completed":47,"skipped":680,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:32:26.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 21 10:32:30.674: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 21 10:32:33.059: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281150, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281150, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281150, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281150, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 21 10:32:35.065: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281150, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281150, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281150, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281150, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 21 10:32:38.107: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Sep 21 10:32:38.149: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:32:38.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-335" for this suite.
STEP: Destroying namespace "webhook-335-markers" for this suite.
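"Registering the crd webhook via the AdmissionRegistration API" amounts to creating a ValidatingWebhookConfiguration that intercepts CREATE requests for CustomResourceDefinitions and routes them to the e2e-test-webhook service deployed above. A sketch under those assumptions (the configuration name, webhook name, path, and caBundle placeholder are all illustrative; the service name and namespace are from the log):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-crd-creation-example    # illustrative name
webhooks:
- name: deny-crd.example.com         # illustrative webhook name
  clientConfig:
    service:
      name: e2e-test-webhook         # service name from the log
      namespace: webhook-335         # test namespace from the log
      path: /crd                     # illustrative path
    caBundle: "<base64-encoded CA>"  # placeholder
  rules:
  - apiGroups: ["apiextensions.k8s.io"]
    apiVersions: ["*"]
    operations: ["CREATE"]
    resources: ["customresourcedefinitions"]
  sideEffects: None
  admissionReviewVersions: ["v1"]
  failurePolicy: Fail                # reject the CRD if the webhook denies it or is unreachable
```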
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:12.287 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":303,"completed":48,"skipped":725,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:32:38.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 21 10:32:39.021: INFO: Create a RollingUpdate DaemonSet
Sep 21 10:32:39.028: INFO: Check that daemon pods launch on every node of the cluster
Sep 21 10:32:39.052: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:32:39.062: INFO: Number of nodes with available pods: 0
Sep 21 10:32:39.062: INFO: Node kali-worker is running more than one daemon pod
Sep 21 10:32:40.074: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:32:40.081: INFO: Number of nodes with available pods: 0
Sep 21 10:32:40.081: INFO: Node kali-worker is running more than one daemon pod
Sep 21 10:32:41.075: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:32:41.179: INFO: Number of nodes with available pods: 0
Sep 21 10:32:41.180: INFO: Node kali-worker is running more than one daemon pod
Sep 21 10:32:42.380: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:32:43.044: INFO: Number of nodes with available pods: 0
Sep 21 10:32:43.044: INFO: Node kali-worker is running more than one daemon pod
Sep 21 10:32:43.108: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:32:43.138: INFO: Number of nodes with available pods: 0
Sep 21 10:32:43.138: INFO: Node kali-worker is running more than one daemon pod
Sep 21 10:32:44.072: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:32:44.101: INFO: Number of nodes with available pods: 2
Sep 21 10:32:44.101: INFO: Number of running nodes: 2, number of available pods: 2
Sep 21 10:32:44.101: INFO: Update the DaemonSet to trigger a rollout
Sep 21 10:32:44.123: INFO: Updating DaemonSet daemon-set
Sep 21 10:32:54.158: INFO: Roll back the DaemonSet before rollout is complete
Sep 21 10:32:54.174: INFO: Updating DaemonSet daemon-set
Sep 21 10:32:54.174: INFO: Make sure DaemonSet rollback is complete
Sep 21 10:32:54.184: INFO: Wrong image for pod: daemon-set-lbmfn. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Sep 21 10:32:54.184: INFO: Pod daemon-set-lbmfn is not available
Sep 21 10:32:54.243: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:32:55.252: INFO: Wrong image for pod: daemon-set-lbmfn. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Sep 21 10:32:55.252: INFO: Pod daemon-set-lbmfn is not available
Sep 21 10:32:55.261: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 10:32:56.294: INFO: Pod daemon-set-kgtlb is not available
Sep 21 10:32:56.313: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8413, will wait for the garbage collector to delete the pods
Sep 21 10:32:56.388: INFO: Deleting DaemonSet.extensions daemon-set took: 9.14484ms
Sep 21 10:32:56.889: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.853867ms
Sep 21 10:33:03.296: INFO: Number of nodes with available pods: 0
Sep 21 10:33:03.296: INFO: Number of running nodes: 0, number of available pods: 0
Sep 21 10:33:03.301: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8413/daemonsets","resourceVersion":"2050613"},"items":null}
Sep 21 10:33:03.306: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8413/pods","resourceVersion":"2050613"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:33:03.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8413" for this suite.
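Per the log, the test creates a RollingUpdate DaemonSet running httpd:2.4.38-alpine, updates it to the unresolvable image foo:non-existent, then rolls it back mid-rollout and checks that already-healthy pods are not restarted. The starting object would look roughly like this sketch (selector labels and container name are illustrative; the image and names are from the log):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-8413
spec:
  selector:
    matchLabels:
      app: daemon-set            # illustrative label
  updateStrategy:
    type: RollingUpdate          # "Create a RollingUpdate DaemonSet"
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app                # illustrative container name
        image: docker.io/library/httpd:2.4.38-alpine   # expected image from the log
```

The rollback itself can be driven with something like `kubectl rollout undo daemonset/daemon-set -n daemonsets-8413`; the test performs the equivalent update through the API.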
• [SLOW TEST:24.453 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":303,"completed":49,"skipped":756,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:33:03.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 21 10:33:03.423: INFO: Waiting up to 5m0s for pod "downwardapi-volume-842c903f-6e72-45de-ad20-1dc47fe6ffa1" in namespace "projected-1240" to be "Succeeded or Failed"
Sep 21 10:33:03.436: INFO: Pod "downwardapi-volume-842c903f-6e72-45de-ad20-1dc47fe6ffa1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.57827ms
Sep 21 10:33:05.449: INFO: Pod "downwardapi-volume-842c903f-6e72-45de-ad20-1dc47fe6ffa1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025177957s
Sep 21 10:33:07.457: INFO: Pod "downwardapi-volume-842c903f-6e72-45de-ad20-1dc47fe6ffa1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033544975s
STEP: Saw pod success
Sep 21 10:33:07.457: INFO: Pod "downwardapi-volume-842c903f-6e72-45de-ad20-1dc47fe6ffa1" satisfied condition "Succeeded or Failed"
Sep 21 10:33:07.462: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-842c903f-6e72-45de-ad20-1dc47fe6ffa1 container client-container:
STEP: delete the pod
Sep 21 10:33:07.511: INFO: Waiting for pod downwardapi-volume-842c903f-6e72-45de-ad20-1dc47fe6ffa1 to disappear
Sep 21 10:33:07.525: INFO: Pod downwardapi-volume-842c903f-6e72-45de-ad20-1dc47fe6ffa1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:33:07.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1240" for this suite.
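This test exposes a downward API field as a file through a projected volume and asserts the per-item mode on that file. A sketch of the relevant volume (the file path, mode 0400, and image are illustrative; client-container is the container name from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400        # the item-file mode the test checks
  containers:
  - name: client-container
    image: busybox            # illustrative image
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
      readOnly: true
```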
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":50,"skipped":768,"failed":0} SSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:33:07.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Sep 21 10:33:11.682: INFO: &Pod{ObjectMeta:{send-events-e2cb5252-7347-445a-be50-405ed85e2edf events-4677 /api/v1/namespaces/events-4677/pods/send-events-e2cb5252-7347-445a-be50-405ed85e2edf 60f9c462-889c-49c1-b899-0fafc229e617 2050687 0 2020-09-21 10:33:07 +0000 UTC map[name:foo time:620992844] map[] [] [] [{e2e.test Update v1 2020-09-21 10:33:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:33:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.93\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2r4zn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2r4zn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requ
ests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2r4zn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:33:07 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:33:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:33:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:33:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.1.93,StartTime:2020-09-21 10:33:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-21 10:33:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://8ce9064409d3253273f365341140db1056c533e53a13d628ddf41dff0fba3642,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.93,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Sep 21 10:33:13.779: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Sep 21 10:33:15.789: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:33:15.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4677" for this suite. 
• [SLOW TEST:8.337 seconds] [k8s.io] [sig-node] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":303,"completed":51,"skipped":776,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:33:15.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 10:33:16.041: INFO: Creating ReplicaSet my-hostname-basic-df668a88-4245-4e1a-9504-41e0de4e3a7f Sep 21 10:33:16.051: INFO: Pod name my-hostname-basic-df668a88-4245-4e1a-9504-41e0de4e3a7f: Found 0 pods out of 1 Sep 21 10:33:21.067: INFO: Pod name my-hostname-basic-df668a88-4245-4e1a-9504-41e0de4e3a7f: Found 1 pods out of 1 Sep 21 10:33:21.067: INFO: Ensuring a pod for 
ReplicaSet "my-hostname-basic-df668a88-4245-4e1a-9504-41e0de4e3a7f" is running Sep 21 10:33:21.072: INFO: Pod "my-hostname-basic-df668a88-4245-4e1a-9504-41e0de4e3a7f-6pg8x" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-21 10:33:16 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-21 10:33:19 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-21 10:33:19 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-21 10:33:16 +0000 UTC Reason: Message:}]) Sep 21 10:33:21.075: INFO: Trying to dial the pod Sep 21 10:33:26.099: INFO: Controller my-hostname-basic-df668a88-4245-4e1a-9504-41e0de4e3a7f: Got expected result from replica 1 [my-hostname-basic-df668a88-4245-4e1a-9504-41e0de4e3a7f-6pg8x]: "my-hostname-basic-df668a88-4245-4e1a-9504-41e0de4e3a7f-6pg8x", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:33:26.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9194" for this suite. 
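The ReplicaSet test above creates one replica of a public hostname-serving image and dials it. A sketch of an equivalent manifest (the suite uses a generated UUID name; this one is illustrative):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic              # illustrative; the suite appends a UUID
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: ["serve-hostname"]       # responds to HTTP requests with the pod's hostname
        ports:
        - containerPort: 9376
```

Each replica's HTTP response is its own pod name, which is how the test confirms the expected replica answered.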
• [SLOW TEST:10.234 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":52,"skipped":796,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:33:26.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Sep 21 10:33:30.873: INFO: Successfully updated pod "labelsupdatea9d68ecc-dd7a-44d1-868b-5c4b8e5049ee" [AfterEach] [sig-storage] Projected downwardAPI 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:33:32.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2874" for this suite. • [SLOW TEST:6.806 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":53,"skipped":810,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:33:32.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create 
role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 21 10:33:42.510: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 21 10:33:44.532: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281222, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281222, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281222, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281222, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 21 10:33:47.830: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:33:47.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-340" for this suite. STEP: Destroying namespace "webhook-340-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.134 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":303,"completed":54,"skipped":826,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:33:48.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a 
default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Sep 21 10:33:48.202: INFO: Waiting up to 5m0s for pod "pod-73b9c1c5-03d0-4e9e-8760-e345b865ee7d" in namespace "emptydir-5481" to be "Succeeded or Failed" Sep 21 10:33:48.214: INFO: Pod "pod-73b9c1c5-03d0-4e9e-8760-e345b865ee7d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.324314ms Sep 21 10:33:50.222: INFO: Pod "pod-73b9c1c5-03d0-4e9e-8760-e345b865ee7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019141511s Sep 21 10:33:52.229: INFO: Pod "pod-73b9c1c5-03d0-4e9e-8760-e345b865ee7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026191587s STEP: Saw pod success Sep 21 10:33:52.229: INFO: Pod "pod-73b9c1c5-03d0-4e9e-8760-e345b865ee7d" satisfied condition "Succeeded or Failed" Sep 21 10:33:52.233: INFO: Trying to get logs from node kali-worker pod pod-73b9c1c5-03d0-4e9e-8760-e345b865ee7d container test-container: STEP: delete the pod Sep 21 10:33:52.294: INFO: Waiting for pod pod-73b9c1c5-03d0-4e9e-8760-e345b865ee7d to disappear Sep 21 10:33:52.389: INFO: Pod pod-73b9c1c5-03d0-4e9e-8760-e345b865ee7d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:33:52.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5481" for this suite. 
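The emptyDir test above writes a file with mode 0666 on the default (node-disk) medium and checks the permissions back. A rough sketch of the pod involved (the mounttest flag names shown are assumptions based on the agnhost mounttest subcommand, not copied from the suite):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo             # illustrative name
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20
    args: ["mounttest",
           "--new_file_0666=/test-volume/test-file",   # create the file with mode 0666
           "--file_perm=/test-volume/test-file"]       # report the resulting permissions
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                       # "default" medium, i.e. backed by node storage
  restartPolicy: Never
```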
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":55,"skipped":873,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:33:52.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Sep 21 10:33:57.060: INFO: Successfully updated pod "annotationupdate47ad2dac-c4ab-490f-9198-2e5087681111" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:33:59.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5603" for this suite. 
• [SLOW TEST:6.713 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":56,"skipped":917,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:33:59.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-a7684cf1-b414-4444-9b8a-b686cb2b88b7 STEP: Creating a pod to test consume secrets Sep 21 10:33:59.199: INFO: Waiting up to 5m0s for pod "pod-secrets-30c80824-d6bb-47a4-a995-9ce6647345ed" in namespace "secrets-2326" to be "Succeeded or Failed" Sep 21 
10:33:59.245: INFO: Pod "pod-secrets-30c80824-d6bb-47a4-a995-9ce6647345ed": Phase="Pending", Reason="", readiness=false. Elapsed: 45.54507ms Sep 21 10:34:01.254: INFO: Pod "pod-secrets-30c80824-d6bb-47a4-a995-9ce6647345ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054549811s Sep 21 10:34:03.263: INFO: Pod "pod-secrets-30c80824-d6bb-47a4-a995-9ce6647345ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063682532s STEP: Saw pod success Sep 21 10:34:03.264: INFO: Pod "pod-secrets-30c80824-d6bb-47a4-a995-9ce6647345ed" satisfied condition "Succeeded or Failed" Sep 21 10:34:03.270: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-30c80824-d6bb-47a4-a995-9ce6647345ed container secret-volume-test: STEP: delete the pod Sep 21 10:34:03.335: INFO: Waiting for pod pod-secrets-30c80824-d6bb-47a4-a995-9ce6647345ed to disappear Sep 21 10:34:03.340: INFO: Pod pod-secrets-30c80824-d6bb-47a4-a995-9ce6647345ed no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:34:03.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2326" for this suite. 
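The secret-volume test above runs as a non-root user with both `defaultMode` and `fsGroup` set, then verifies the mounted file's ownership and mode. A minimal sketch, with illustrative names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo             # illustrative name
spec:
  securityContext:
    runAsUser: 1000                    # non-root, as the test title requires
    fsGroup: 1001                      # group ownership applied to the volume's files
  containers:
  - name: secret-volume-test
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20
    args: ["mounttest", "--file_mode=/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test          # illustrative; the suite uses a UUID-suffixed name
      defaultMode: 0400                # mode applied to every key not overridden per-item
  restartPolicy: Never
```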
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":57,"skipped":927,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:34:03.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-43e196f5-1c69-44bc-9e55-775757618084 STEP: Creating a pod to test consume secrets Sep 21 10:34:03.457: INFO: Waiting up to 5m0s for pod "pod-secrets-23b1e94a-b626-48f3-8f4a-bb27e5145073" in namespace "secrets-7394" to be "Succeeded or Failed" Sep 21 10:34:03.522: INFO: Pod "pod-secrets-23b1e94a-b626-48f3-8f4a-bb27e5145073": Phase="Pending", Reason="", readiness=false. Elapsed: 65.364736ms Sep 21 10:34:05.530: INFO: Pod "pod-secrets-23b1e94a-b626-48f3-8f4a-bb27e5145073": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073221976s Sep 21 10:34:07.537: INFO: Pod "pod-secrets-23b1e94a-b626-48f3-8f4a-bb27e5145073": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.080140752s Sep 21 10:34:09.554: INFO: Pod "pod-secrets-23b1e94a-b626-48f3-8f4a-bb27e5145073": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.09751589s STEP: Saw pod success Sep 21 10:34:09.555: INFO: Pod "pod-secrets-23b1e94a-b626-48f3-8f4a-bb27e5145073" satisfied condition "Succeeded or Failed" Sep 21 10:34:09.561: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-23b1e94a-b626-48f3-8f4a-bb27e5145073 container secret-volume-test: STEP: delete the pod Sep 21 10:34:09.589: INFO: Waiting for pod pod-secrets-23b1e94a-b626-48f3-8f4a-bb27e5145073 to disappear Sep 21 10:34:09.687: INFO: Pod pod-secrets-23b1e94a-b626-48f3-8f4a-bb27e5145073 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:34:09.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7394" for this suite. 
• [SLOW TEST:6.358 seconds] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":58,"skipped":928,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Ingress API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:34:09.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Sep 21 10:34:09.862: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Sep 21 10:34:09.870: INFO: starting watch STEP: patching STEP: updating Sep 21 10:34:09.886: INFO: waiting for watch 
events with expected annotations Sep 21 10:34:09.886: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:34:09.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-1969" for this suite. •{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":303,"completed":59,"skipped":970,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:34:10.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Sep 21 10:34:14.701: INFO: Successfully updated pod 
"pod-update-activedeadlineseconds-c6dd03fe-3130-45b2-983d-c47dca104c08" Sep 21 10:34:14.702: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-c6dd03fe-3130-45b2-983d-c47dca104c08" in namespace "pods-9197" to be "terminated due to deadline exceeded" Sep 21 10:34:14.728: INFO: Pod "pod-update-activedeadlineseconds-c6dd03fe-3130-45b2-983d-c47dca104c08": Phase="Running", Reason="", readiness=true. Elapsed: 26.531342ms Sep 21 10:34:16.737: INFO: Pod "pod-update-activedeadlineseconds-c6dd03fe-3130-45b2-983d-c47dca104c08": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.034919318s Sep 21 10:34:16.737: INFO: Pod "pod-update-activedeadlineseconds-c6dd03fe-3130-45b2-983d-c47dca104c08" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:34:16.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9197" for this suite. 
• [SLOW TEST:6.762 seconds] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":303,"completed":60,"skipped":983,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:34:16.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 21 10:34:28.109: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 21 10:34:30.128: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281268, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281268, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281268, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281268, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 21 10:34:33.221: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery 
document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:34:33.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9315" for this suite. STEP: Destroying namespace "webhook-9315-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.659 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":303,"completed":61,"skipped":1015,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:34:33.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-df603d81-00da-4f20-b240-49cc04ead153 STEP: Creating a pod to test consume configMaps Sep 21 10:34:33.560: INFO: Waiting up to 5m0s for pod "pod-configmaps-055b42ec-f2cd-4620-ad8e-b6d38c4bbd84" in namespace "configmap-7855" to be "Succeeded or Failed" Sep 21 10:34:33.567: INFO: Pod "pod-configmaps-055b42ec-f2cd-4620-ad8e-b6d38c4bbd84": Phase="Pending", Reason="", readiness=false. Elapsed: 6.754815ms Sep 21 10:34:35.574: INFO: Pod "pod-configmaps-055b42ec-f2cd-4620-ad8e-b6d38c4bbd84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013843294s Sep 21 10:34:37.581: INFO: Pod "pod-configmaps-055b42ec-f2cd-4620-ad8e-b6d38c4bbd84": Phase="Running", Reason="", readiness=true. Elapsed: 4.021076805s Sep 21 10:34:39.588: INFO: Pod "pod-configmaps-055b42ec-f2cd-4620-ad8e-b6d38c4bbd84": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.028408242s STEP: Saw pod success Sep 21 10:34:39.589: INFO: Pod "pod-configmaps-055b42ec-f2cd-4620-ad8e-b6d38c4bbd84" satisfied condition "Succeeded or Failed" Sep 21 10:34:39.594: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-055b42ec-f2cd-4620-ad8e-b6d38c4bbd84 container configmap-volume-test: STEP: delete the pod Sep 21 10:34:39.695: INFO: Waiting for pod pod-configmaps-055b42ec-f2cd-4620-ad8e-b6d38c4bbd84 to disappear Sep 21 10:34:39.701: INFO: Pod pod-configmaps-055b42ec-f2cd-4620-ad8e-b6d38c4bbd84 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:34:39.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7855" for this suite. • [SLOW TEST:6.284 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":62,"skipped":1029,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:34:39.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should support rollover [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 10:34:39.835: INFO: Pod name rollover-pod: Found 0 pods out of 1 Sep 21 10:34:44.842: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Sep 21 10:34:44.843: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Sep 21 10:34:46.852: INFO: Creating deployment "test-rollover-deployment" Sep 21 10:34:46.909: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Sep 21 10:34:48.929: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Sep 21 10:34:48.941: INFO: Ensure that both replica sets have 1 created replica Sep 21 10:34:48.951: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Sep 21 10:34:48.962: INFO: Updating deployment test-rollover-deployment Sep 21 10:34:48.962: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Sep 21 10:34:51.000: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Sep 21 10:34:51.010: INFO: Make sure deployment "test-rollover-deployment" is complete Sep 21 10:34:51.020: INFO: all replica sets need to contain the pod-template-hash label Sep 21 
10:34:51.020: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281286, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281286, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281289, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281286, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 21 10:34:53.036: INFO: all replica sets need to contain the pod-template-hash label Sep 21 10:34:53.036: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281286, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281286, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281292, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281286, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} 
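[Editor's note] The repeated, near-identical status dumps above are expected: the Deployment's spec (visible in the final object dump later in this test) sets MinReadySeconds:10 with MaxUnavailable:0 and MaxSurge:1, so the new ReplicaSet's pod must stay Ready for 10 seconds before it counts as Available — hence successive polls showing ReadyReplicas:2 but AvailableReplicas:1. A sketch of the fields that produce this rollover behavior, reconstructed from the logged spec (not the test's literal manifest):

```yaml
# Illustrative sketch of the rollover configuration exercised above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10           # new pod must stay Ready 10s to be Available
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0         # old pod is kept until the new one is Available
      maxSurge: 1               # one extra pod allowed during the rollover
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
```

With maxUnavailable: 0 and maxSurge: 1, the controller surges to two pods (Replicas:2, UpdatedReplicas:1 in the dumps) and only scales the old ReplicaSet to zero after the new pod clears the minReadySeconds window.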
Sep 21 10:34:55.039: INFO: all replica sets need to contain the pod-template-hash label Sep 21 10:34:55.039: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281286, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281286, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281292, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281286, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 21 10:34:57.037: INFO: all replica sets need to contain the pod-template-hash label Sep 21 10:34:57.038: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281286, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281286, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281292, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281286, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 21 10:34:59.036: INFO: all replica sets need to contain the pod-template-hash label Sep 21 10:34:59.037: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281286, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281286, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281292, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281286, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 21 10:35:01.035: INFO: all replica sets need to contain the pod-template-hash label Sep 21 10:35:01.036: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281286, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281286, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281292, loc:(*time.Location)(0x5d1d160)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281286, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 21 10:35:03.033: INFO: Sep 21 10:35:03.033: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 21 10:35:03.049: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-8181 /apis/apps/v1/namespaces/deployment-8181/deployments/test-rollover-deployment 50ee109f-c314-4c7c-bd43-10ea572d253d 2051488 2 2020-09-21 10:34:46 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-09-21 10:34:48 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-21 10:35:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xa8bce08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-09-21 10:34:46 +0000 
UTC,LastTransitionTime:2020-09-21 10:34:46 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-5797c7764" has successfully progressed.,LastUpdateTime:2020-09-21 10:35:02 +0000 UTC,LastTransitionTime:2020-09-21 10:34:46 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Sep 21 10:35:03.278: INFO: New ReplicaSet "test-rollover-deployment-5797c7764" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-5797c7764 deployment-8181 /apis/apps/v1/namespaces/deployment-8181/replicasets/test-rollover-deployment-5797c7764 fdeed936-7709-453c-a62c-6d3c90950ed3 2051476 2 2020-09-21 10:34:48 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 50ee109f-c314-4c7c-bd43-10ea572d253d 0x89763c0 0x89763c1}] [] [{kube-controller-manager Update apps/v1 2020-09-21 10:35:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"50ee109f-c314-4c7c-bd43-10ea572d253d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5797c7764,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x8976438 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Sep 21 10:35:03.278: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Sep 21 10:35:03.279: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-8181 /apis/apps/v1/namespaces/deployment-8181/replicasets/test-rollover-controller 0b2c4aab-79f3-4bbc-aade-1cd2e32beeb6 2051487 2 2020-09-21 10:34:39 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 50ee109f-c314-4c7c-bd43-10ea572d253d 0x89762b7 0x89762b8}] [] [{e2e.test Update apps/v1 2020-09-21 10:34:39 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-21 10:35:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"50ee109f-c314-4c7c-bd43-10ea572d253d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x8976358 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 21 10:35:03.280: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-8181 /apis/apps/v1/namespaces/deployment-8181/replicasets/test-rollover-deployment-78bc8b888c b5e58d51-f6bd-4f80-8f1b-08d267b18bdd 2051425 2 2020-09-21 10:34:46 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 50ee109f-c314-4c7c-bd43-10ea572d253d 0x89764a7 0x89764a8}] [] [{kube-controller-manager Update apps/v1 2020-09-21 10:34:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"50ee109f-c314-4c7c-bd43-10ea572d253d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x8976538 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] 
nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 21 10:35:03.290: INFO: Pod "test-rollover-deployment-5797c7764-r7mpz" is available: &Pod{ObjectMeta:{test-rollover-deployment-5797c7764-r7mpz test-rollover-deployment-5797c7764- deployment-8181 /api/v1/namespaces/deployment-8181/pods/test-rollover-deployment-5797c7764-r7mpz 1643f833-497c-45bc-9873-62e47263a43c 2051443 0 2020-09-21 10:34:49 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [{apps/v1 ReplicaSet test-rollover-deployment-5797c7764 fdeed936-7709-453c-a62c-6d3c90950ed3 0x8976aa0 0x8976aa1}] [] [{kube-controller-manager Update v1 2020-09-21 10:34:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fdeed936-7709-453c-a62c-6d3c90950ed3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 10:34:52 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.120\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r98jv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r98jv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r98jv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePoli
cy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:34:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:34:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:34:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 10:34:49 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.120,StartTime:2020-09-21 10:34:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-21 10:34:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://e13989d4f515853aee7877afc58a4ec69365ca0e842b71a486ad0b7ee55f271a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.120,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:35:03.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8181" for this suite. 
• [SLOW TEST:23.585 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":303,"completed":63,"skipped":1065,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:35:03.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Sep 21 10:35:03.748: INFO: starting watch STEP: patching STEP: updating Sep 21 10:35:03.794: INFO: waiting 
for watch events with expected annotations Sep 21 10:35:03.796: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:35:03.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-7463" for this suite. •{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":303,"completed":64,"skipped":1112,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:35:03.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1699 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in 
namespace services-1699 I0921 10:35:04.052459 10 runners.go:190] Created replication controller with name: externalname-service, namespace: services-1699, replica count: 2 I0921 10:35:07.104787 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0921 10:35:10.105758 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 21 10:35:10.106: INFO: Creating new exec pod Sep 21 10:35:15.146: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-1699 execpodl46bh -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Sep 21 10:35:16.681: INFO: stderr: "I0921 10:35:16.580076 1008 log.go:181] (0x250ef50) (0x250f3b0) Create stream\nI0921 10:35:16.584630 1008 log.go:181] (0x250ef50) (0x250f3b0) Stream added, broadcasting: 1\nI0921 10:35:16.603491 1008 log.go:181] (0x250ef50) Reply frame received for 1\nI0921 10:35:16.604096 1008 log.go:181] (0x250ef50) (0x2cea380) Create stream\nI0921 10:35:16.604212 1008 log.go:181] (0x250ef50) (0x2cea380) Stream added, broadcasting: 3\nI0921 10:35:16.605515 1008 log.go:181] (0x250ef50) Reply frame received for 3\nI0921 10:35:16.605750 1008 log.go:181] (0x250ef50) (0x2cea850) Create stream\nI0921 10:35:16.605826 1008 log.go:181] (0x250ef50) (0x2cea850) Stream added, broadcasting: 5\nI0921 10:35:16.606949 1008 log.go:181] (0x250ef50) Reply frame received for 5\nI0921 10:35:16.660647 1008 log.go:181] (0x250ef50) Data frame received for 5\nI0921 10:35:16.660962 1008 log.go:181] (0x2cea850) (5) Data frame handling\nI0921 10:35:16.661204 1008 log.go:181] (0x250ef50) Data frame received for 3\nI0921 10:35:16.661349 1008 log.go:181] (0x2cea380) (3) Data frame handling\nI0921 10:35:16.661454 1008 log.go:181] (0x2cea850) (5) Data frame sent\nI0921 10:35:16.661807 
1008 log.go:181] (0x250ef50) Data frame received for 5\nI0921 10:35:16.661906 1008 log.go:181] (0x2cea850) (5) Data frame handling\nI0921 10:35:16.662335 1008 log.go:181] (0x250ef50) Data frame received for 1\nI0921 10:35:16.662490 1008 log.go:181] (0x250f3b0) (1) Data frame handling\nI0921 10:35:16.662638 1008 log.go:181] (0x250f3b0) (1) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0921 10:35:16.665179 1008 log.go:181] (0x2cea850) (5) Data frame sent\nI0921 10:35:16.665420 1008 log.go:181] (0x250ef50) Data frame received for 5\nI0921 10:35:16.665738 1008 log.go:181] (0x250ef50) (0x250f3b0) Stream removed, broadcasting: 1\nI0921 10:35:16.666179 1008 log.go:181] (0x2cea850) (5) Data frame handling\nI0921 10:35:16.667375 1008 log.go:181] (0x250ef50) Go away received\nI0921 10:35:16.671324 1008 log.go:181] (0x250ef50) (0x250f3b0) Stream removed, broadcasting: 1\nI0921 10:35:16.671552 1008 log.go:181] (0x250ef50) (0x2cea380) Stream removed, broadcasting: 3\nI0921 10:35:16.671773 1008 log.go:181] (0x250ef50) (0x2cea850) Stream removed, broadcasting: 5\n" Sep 21 10:35:16.682: INFO: stdout: "" Sep 21 10:35:16.688: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-1699 execpodl46bh -- /bin/sh -x -c nc -zv -t -w 2 10.107.103.150 80' Sep 21 10:35:18.232: INFO: stderr: "I0921 10:35:18.094318 1028 log.go:181] (0x2ea8000) (0x2ea8070) Create stream\nI0921 10:35:18.097805 1028 log.go:181] (0x2ea8000) (0x2ea8070) Stream added, broadcasting: 1\nI0921 10:35:18.108939 1028 log.go:181] (0x2ea8000) Reply frame received for 1\nI0921 10:35:18.110012 1028 log.go:181] (0x2ea8000) (0x2ea8310) Create stream\nI0921 10:35:18.110141 1028 log.go:181] (0x2ea8000) (0x2ea8310) Stream added, broadcasting: 3\nI0921 10:35:18.112614 1028 log.go:181] (0x2ea8000) Reply frame received for 3\nI0921 10:35:18.113167 1028 log.go:181] (0x2ea8000) 
(0x29e40e0) Create stream\nI0921 10:35:18.113290 1028 log.go:181] (0x2ea8000) (0x29e40e0) Stream added, broadcasting: 5\nI0921 10:35:18.115291 1028 log.go:181] (0x2ea8000) Reply frame received for 5\nI0921 10:35:18.213704 1028 log.go:181] (0x2ea8000) Data frame received for 3\nI0921 10:35:18.214042 1028 log.go:181] (0x2ea8000) Data frame received for 5\nI0921 10:35:18.214383 1028 log.go:181] (0x2ea8310) (3) Data frame handling\nI0921 10:35:18.214667 1028 log.go:181] (0x29e40e0) (5) Data frame handling\nI0921 10:35:18.215308 1028 log.go:181] (0x29e40e0) (5) Data frame sent\nI0921 10:35:18.215459 1028 log.go:181] (0x2ea8000) Data frame received for 5\nI0921 10:35:18.215566 1028 log.go:181] (0x29e40e0) (5) Data frame handling\nI0921 10:35:18.215908 1028 log.go:181] (0x2ea8000) Data frame received for 1\n+ nc -zv -t -w 2 10.107.103.150 80\nConnection to 10.107.103.150 80 port [tcp/http] succeeded!\nI0921 10:35:18.216087 1028 log.go:181] (0x2ea8070) (1) Data frame handling\nI0921 10:35:18.216394 1028 log.go:181] (0x2ea8070) (1) Data frame sent\nI0921 10:35:18.218919 1028 log.go:181] (0x2ea8000) (0x2ea8070) Stream removed, broadcasting: 1\nI0921 10:35:18.220964 1028 log.go:181] (0x2ea8000) Go away received\nI0921 10:35:18.223174 1028 log.go:181] (0x2ea8000) (0x2ea8070) Stream removed, broadcasting: 1\nI0921 10:35:18.223512 1028 log.go:181] (0x2ea8000) (0x2ea8310) Stream removed, broadcasting: 3\nI0921 10:35:18.223660 1028 log.go:181] (0x2ea8000) (0x29e40e0) Stream removed, broadcasting: 5\n" Sep 21 10:35:18.233: INFO: stdout: "" Sep 21 10:35:18.233: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:35:18.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1699" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:14.398 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":303,"completed":65,"skipped":1115,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:35:18.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-a6d6bd84-a25a-41e3-869d-b30f057cc051 STEP: Creating a pod to test consume configMaps Sep 21 10:35:18.387: INFO: Waiting up to 5m0s 
for pod "pod-projected-configmaps-78ff5e91-9ba7-4080-ba21-cc949db21f3d" in namespace "projected-8387" to be "Succeeded or Failed" Sep 21 10:35:18.410: INFO: Pod "pod-projected-configmaps-78ff5e91-9ba7-4080-ba21-cc949db21f3d": Phase="Pending", Reason="", readiness=false. Elapsed: 22.890052ms Sep 21 10:35:20.420: INFO: Pod "pod-projected-configmaps-78ff5e91-9ba7-4080-ba21-cc949db21f3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032391214s Sep 21 10:35:22.428: INFO: Pod "pod-projected-configmaps-78ff5e91-9ba7-4080-ba21-cc949db21f3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040786915s STEP: Saw pod success Sep 21 10:35:22.428: INFO: Pod "pod-projected-configmaps-78ff5e91-9ba7-4080-ba21-cc949db21f3d" satisfied condition "Succeeded or Failed" Sep 21 10:35:22.433: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-78ff5e91-9ba7-4080-ba21-cc949db21f3d container projected-configmap-volume-test: STEP: delete the pod Sep 21 10:35:22.503: INFO: Waiting for pod pod-projected-configmaps-78ff5e91-9ba7-4080-ba21-cc949db21f3d to disappear Sep 21 10:35:22.513: INFO: Pod pod-projected-configmaps-78ff5e91-9ba7-4080-ba21-cc949db21f3d no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:35:22.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8387" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":66,"skipped":1127,"failed":0} ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:35:22.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1008.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1008.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1008.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1008.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1008.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1008.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 
_http._tcp.test-service-2.dns-1008.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1008.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1008.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1008.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1008.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 75.125.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.125.75_udp@PTR;check="$$(dig +tcp +noall +answer +search 75.125.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.125.75_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1008.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1008.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1008.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1008.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1008.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1008.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1008.svc.cluster.local SRV)" && test -n 
"$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1008.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1008.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1008.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1008.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 75.125.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.125.75_udp@PTR;check="$$(dig +tcp +noall +answer +search 75.125.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.125.75_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 21 10:35:30.730: INFO: Unable to read wheezy_udp@dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788) Sep 21 10:35:30.734: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788) Sep 21 10:35:30.738: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788) Sep 21 10:35:30.743: INFO: Unable to read 
wheezy_tcp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788) Sep 21 10:35:30.772: INFO: Unable to read jessie_udp@dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788) Sep 21 10:35:30.776: INFO: Unable to read jessie_tcp@dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788) Sep 21 10:35:30.780: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788) Sep 21 10:35:30.784: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788) Sep 21 10:35:30.808: INFO: Lookups using dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788 failed for: [wheezy_udp@dns-test-service.dns-1008.svc.cluster.local wheezy_tcp@dns-test-service.dns-1008.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local jessie_udp@dns-test-service.dns-1008.svc.cluster.local jessie_tcp@dns-test-service.dns-1008.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local] Sep 21 10:35:35.819: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788) Sep 21 10:35:35.825: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788) Sep 21 10:35:35.829: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788) Sep 21 10:35:35.834: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788) Sep 21 10:35:35.865: INFO: Unable to read jessie_udp@dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788) Sep 21 10:35:35.868: INFO: Unable to read jessie_tcp@dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788) Sep 21 10:35:35.872: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788) Sep 21 10:35:35.875: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local from pod 
dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:35.901: INFO: Lookups using dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788 failed for: [wheezy_udp@dns-test-service.dns-1008.svc.cluster.local wheezy_tcp@dns-test-service.dns-1008.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local jessie_udp@dns-test-service.dns-1008.svc.cluster.local jessie_tcp@dns-test-service.dns-1008.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local]
Sep 21 10:35:40.816: INFO: Unable to read wheezy_udp@dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:40.822: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:40.827: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:40.832: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:40.861: INFO: Unable to read jessie_udp@dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:40.864: INFO: Unable to read jessie_tcp@dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:40.867: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:40.871: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:40.892: INFO: Lookups using dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788 failed for: [wheezy_udp@dns-test-service.dns-1008.svc.cluster.local wheezy_tcp@dns-test-service.dns-1008.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local jessie_udp@dns-test-service.dns-1008.svc.cluster.local jessie_tcp@dns-test-service.dns-1008.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local]
Sep 21 10:35:45.816: INFO: Unable to read wheezy_udp@dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:45.821: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:45.826: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:45.830: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:45.860: INFO: Unable to read jessie_udp@dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:45.864: INFO: Unable to read jessie_tcp@dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:45.868: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:45.872: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:45.895: INFO: Lookups using dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788 failed for: [wheezy_udp@dns-test-service.dns-1008.svc.cluster.local wheezy_tcp@dns-test-service.dns-1008.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local jessie_udp@dns-test-service.dns-1008.svc.cluster.local jessie_tcp@dns-test-service.dns-1008.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local]
Sep 21 10:35:50.817: INFO: Unable to read wheezy_udp@dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:50.823: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:50.828: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:50.833: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:50.865: INFO: Unable to read jessie_udp@dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:50.869: INFO: Unable to read jessie_tcp@dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:50.873: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:50.878: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:50.903: INFO: Lookups using dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788 failed for: [wheezy_udp@dns-test-service.dns-1008.svc.cluster.local wheezy_tcp@dns-test-service.dns-1008.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local jessie_udp@dns-test-service.dns-1008.svc.cluster.local jessie_tcp@dns-test-service.dns-1008.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local]
Sep 21 10:35:55.817: INFO: Unable to read wheezy_udp@dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:55.822: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:55.828: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:55.856: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:55.886: INFO: Unable to read jessie_udp@dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:55.891: INFO: Unable to read jessie_tcp@dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:55.915: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:55.951: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local from pod dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788: the server could not find the requested resource (get pods dns-test-9c4033a6-af25-444b-9434-e29e78e74788)
Sep 21 10:35:55.978: INFO: Lookups using dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788 failed for: [wheezy_udp@dns-test-service.dns-1008.svc.cluster.local wheezy_tcp@dns-test-service.dns-1008.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local jessie_udp@dns-test-service.dns-1008.svc.cluster.local jessie_tcp@dns-test-service.dns-1008.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1008.svc.cluster.local]
Sep 21 10:36:00.896: INFO: DNS probes using dns-1008/dns-test-9c4033a6-af25-444b-9434-e29e78e74788 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:36:01.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1008" for this suite.
• [SLOW TEST:39.408 seconds]
[sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":303,"completed":67,"skipped":1127,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Service endpoints latency
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:36:01.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 21 10:36:02.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-1638
I0921 10:36:02.093305 10 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1638, replica count: 1
I0921 10:36:03.145378 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0921 10:36:04.146010 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0921 10:36:05.146952 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Sep 21 10:36:05.281: INFO: Created: latency-svc-hfrr8
Sep 21 10:36:05.316: INFO: Got endpoints: latency-svc-hfrr8 [67.034311ms]
Sep 21 10:36:05.365: INFO: Created: latency-svc-x2zkg
Sep 21 10:36:05.377: INFO: Got endpoints: latency-svc-x2zkg [59.301455ms]
Sep 21 10:36:05.401: INFO: Created: latency-svc-j469p
Sep 21 10:36:05.437: INFO: Got endpoints: latency-svc-j469p [119.206168ms]
Sep 21 10:36:05.486: INFO: Created: latency-svc-m9mwg
Sep 21 10:36:05.490: INFO: Got endpoints: latency-svc-m9mwg [172.808036ms]
Sep 21 10:36:05.533: INFO: Created: latency-svc-8gvjv
Sep 21 10:36:05.546: INFO: Got endpoints: latency-svc-8gvjv [228.724198ms]
Sep 21 10:36:05.562: INFO: Created: latency-svc-zjr4s
Sep 21 10:36:05.577: INFO: Got endpoints: latency-svc-zjr4s [259.358737ms]
Sep 21 10:36:05.618: INFO: Created: latency-svc-pz9dd
Sep 21 10:36:05.629: INFO: Got endpoints: latency-svc-pz9dd [311.870232ms]
Sep 21 10:36:05.652: INFO: Created: latency-svc-t7bcd
Sep 21 10:36:05.665: INFO: Got endpoints: latency-svc-t7bcd [347.604009ms]
Sep 21 10:36:05.682: INFO: Created: latency-svc-nfm9d
Sep 21 10:36:05.697: INFO: Got endpoints: latency-svc-nfm9d [379.655843ms]
Sep 21 10:36:05.762: INFO: Created: latency-svc-qfhgk
Sep 21 10:36:05.767: INFO: Got endpoints: latency-svc-qfhgk [449.788594ms]
Sep 21 10:36:05.803: INFO: Created: latency-svc-fswkl
Sep 21 10:36:05.854: INFO: Got endpoints: latency-svc-fswkl [536.420901ms]
Sep 21 10:36:05.916: INFO: Created: latency-svc-l558k
Sep 21 10:36:05.941: INFO: Got endpoints: latency-svc-l558k [622.347498ms]
Sep 21 10:36:05.971: INFO: Created: latency-svc-h8lz2
Sep 21 10:36:05.998: INFO: Got endpoints: latency-svc-h8lz2 [680.451701ms]
Sep 21 10:36:06.097: INFO: Created: latency-svc-kqs9s
Sep 21 10:36:06.101: INFO: Got endpoints: latency-svc-kqs9s [782.316102ms]
Sep 21 10:36:06.158: INFO: Created: latency-svc-7xv2p
Sep 21 10:36:06.187: INFO: Got endpoints: latency-svc-7xv2p [869.70503ms]
Sep 21 10:36:06.246: INFO: Created: latency-svc-fxvrh
Sep 21 10:36:06.251: INFO: Got endpoints: latency-svc-fxvrh [150.200349ms]
Sep 21 10:36:06.308: INFO: Created: latency-svc-2qqm2
Sep 21 10:36:06.319: INFO: Got endpoints: latency-svc-2qqm2 [1.001834733s]
Sep 21 10:36:06.339: INFO: Created: latency-svc-v4xgt
Sep 21 10:36:06.404: INFO: Got endpoints: latency-svc-v4xgt [1.026596101s]
Sep 21 10:36:06.404: INFO: Created: latency-svc-6mkx8
Sep 21 10:36:06.433: INFO: Got endpoints: latency-svc-6mkx8 [995.511457ms]
Sep 21 10:36:06.463: INFO: Created: latency-svc-fzk75
Sep 21 10:36:06.476: INFO: Got endpoints: latency-svc-fzk75 [986.448297ms]
Sep 21 10:36:06.557: INFO: Created: latency-svc-g8fh4
Sep 21 10:36:06.566: INFO: Got endpoints: latency-svc-g8fh4 [1.020492589s]
Sep 21 10:36:06.631: INFO: Created: latency-svc-vls4x
Sep 21 10:36:06.695: INFO: Got endpoints: latency-svc-vls4x [1.117972243s]
Sep 21 10:36:06.702: INFO: Created: latency-svc-58zmw
Sep 21 10:36:06.717: INFO: Got endpoints: latency-svc-58zmw [1.087871571s]
Sep 21 10:36:06.751: INFO: Created: latency-svc-hl6t8
Sep 21 10:36:06.776: INFO: Got endpoints: latency-svc-hl6t8 [1.11109228s]
Sep 21 10:36:06.833: INFO: Created: latency-svc-xpx9z
Sep 21 10:36:06.853: INFO: Got endpoints: latency-svc-xpx9z [1.156479204s]
Sep 21 10:36:06.883: INFO: Created: latency-svc-4gfr2
Sep 21 10:36:06.900: INFO: Got endpoints: latency-svc-4gfr2 [1.132639689s]
Sep 21 10:36:06.919: INFO: Created: latency-svc-4hmkc
Sep 21 10:36:06.987: INFO: Got endpoints: latency-svc-4hmkc [1.13281695s]
Sep 21 10:36:06.996: INFO: Created: latency-svc-xkcx2
Sep 21 10:36:07.012: INFO: Got endpoints: latency-svc-xkcx2 [1.070956537s]
Sep 21 10:36:07.076: INFO: Created: latency-svc-zcv5w
Sep 21 10:36:07.115: INFO: Got endpoints: latency-svc-zcv5w [1.117265932s]
Sep 21 10:36:07.129: INFO: Created: latency-svc-hhrfn
Sep 21 10:36:07.146: INFO: Got endpoints: latency-svc-hhrfn [959.385918ms]
Sep 21 10:36:07.164: INFO: Created: latency-svc-z72fd
Sep 21 10:36:07.175: INFO: Got endpoints: latency-svc-z72fd [923.746273ms]
Sep 21 10:36:07.214: INFO: Created: latency-svc-q447r
Sep 21 10:36:07.308: INFO: Got endpoints: latency-svc-q447r [989.32437ms]
Sep 21 10:36:07.310: INFO: Created: latency-svc-fdd9m
Sep 21 10:36:07.319: INFO: Got endpoints: latency-svc-fdd9m [914.709013ms]
Sep 21 10:36:07.338: INFO: Created: latency-svc-2s4kv
Sep 21 10:36:07.351: INFO: Got endpoints: latency-svc-2s4kv [918.388297ms]
Sep 21 10:36:07.368: INFO: Created: latency-svc-s2chn
Sep 21 10:36:07.381: INFO: Got endpoints: latency-svc-s2chn [904.313533ms]
Sep 21 10:36:07.405: INFO: Created: latency-svc-zbqtv
Sep 21 10:36:07.449: INFO: Got endpoints: latency-svc-zbqtv [882.484393ms]
Sep 21 10:36:07.458: INFO: Created: latency-svc-jzgfk
Sep 21 10:36:07.484: INFO: Got endpoints: latency-svc-jzgfk [789.334557ms]
Sep 21 10:36:07.513: INFO: Created: latency-svc-vw7fv
Sep 21 10:36:07.529: INFO: Got endpoints: latency-svc-vw7fv [811.500973ms]
Sep 21 10:36:07.548: INFO: Created: latency-svc-4nlzw
Sep 21 10:36:07.588: INFO: Got endpoints: latency-svc-4nlzw [811.562522ms]
Sep 21 10:36:07.614: INFO: Created: latency-svc-xz7jz
Sep 21 10:36:07.629: INFO: Got endpoints: latency-svc-xz7jz [775.674538ms]
Sep 21 10:36:07.761: INFO: Created: latency-svc-gzmjn
Sep 21 10:36:07.775: INFO: Got endpoints: latency-svc-gzmjn [874.63433ms]
Sep 21 10:36:07.795: INFO: Created: latency-svc-c9k64
Sep 21 10:36:07.823: INFO: Got endpoints: latency-svc-c9k64 [835.300714ms]
Sep 21 10:36:07.917: INFO: Created: latency-svc-vs4gq
Sep 21 10:36:07.944: INFO: Got endpoints: latency-svc-vs4gq [932.350393ms]
Sep 21 10:36:07.946: INFO: Created: latency-svc-87h2g
Sep 21 10:36:07.993: INFO: Got endpoints: latency-svc-87h2g [877.85791ms]
Sep 21 10:36:08.073: INFO: Created: latency-svc-d6n67
Sep 21 10:36:08.081: INFO: Got endpoints: latency-svc-d6n67 [935.048774ms]
Sep 21 10:36:08.112: INFO: Created: latency-svc-d2stk
Sep 21 10:36:08.145: INFO: Got endpoints: latency-svc-d2stk [969.771066ms]
Sep 21 10:36:08.254: INFO: Created: latency-svc-55blg
Sep 21 10:36:08.262: INFO: Got endpoints: latency-svc-55blg [953.279803ms]
Sep 21 10:36:08.286: INFO: Created: latency-svc-r6d6s
Sep 21 10:36:08.299: INFO: Got endpoints: latency-svc-r6d6s [979.737499ms]
Sep 21 10:36:08.347: INFO: Created: latency-svc-2pjkj
Sep 21 10:36:08.422: INFO: Got endpoints: latency-svc-2pjkj [1.070686501s]
Sep 21 10:36:08.422: INFO: Created: latency-svc-xj5v8
Sep 21 10:36:08.430: INFO: Got endpoints: latency-svc-xj5v8 [1.049094188s]
Sep 21 10:36:08.466: INFO: Created: latency-svc-7p44d
Sep 21 10:36:08.491: INFO: Got endpoints: latency-svc-7p44d [1.04167248s]
Sep 21 10:36:08.564: INFO: Created: latency-svc-6zclb
Sep 21 10:36:08.587: INFO: Got endpoints: latency-svc-6zclb [1.102435774s]
Sep 21 10:36:08.653: INFO: Created: latency-svc-hlbfz
Sep 21 10:36:08.708: INFO: Got endpoints: latency-svc-hlbfz [1.178528966s]
Sep 21 10:36:08.749: INFO: Created: latency-svc-zm2bl
Sep 21 10:36:08.763: INFO: Got endpoints: latency-svc-zm2bl [1.174236454s]
Sep 21 10:36:08.846: INFO: Created: latency-svc-c97w8
Sep 21 10:36:08.870: INFO: Got endpoints: latency-svc-c97w8 [1.240445304s]
Sep 21 10:36:08.904: INFO: Created: latency-svc-l4kz8
Sep 21 10:36:08.928: INFO: Got endpoints: latency-svc-l4kz8 [1.152534143s]
Sep 21 10:36:08.983: INFO: Created: latency-svc-t7jrd
Sep 21 10:36:08.997: INFO: Got endpoints: latency-svc-t7jrd [1.174624248s]
Sep 21 10:36:09.018: INFO: Created: latency-svc-8bl4r
Sep 21 10:36:09.033: INFO: Got endpoints: latency-svc-8bl4r [1.088666642s]
Sep 21 10:36:09.072: INFO: Created: latency-svc-sj95f
Sep 21 10:36:09.175: INFO: Got endpoints: latency-svc-sj95f [1.181869742s]
Sep 21 10:36:09.211: INFO: Created: latency-svc-5cxw6
Sep 21 10:36:09.226: INFO: Got endpoints: latency-svc-5cxw6 [1.143956353s]
Sep 21 10:36:09.253: INFO: Created: latency-svc-hzw5l
Sep 21 10:36:09.331: INFO: Got endpoints: latency-svc-hzw5l [1.186192802s]
Sep 21 10:36:09.359: INFO: Created: latency-svc-pnpmj
Sep 21 10:36:09.379: INFO: Got endpoints: latency-svc-pnpmj [1.116884772s]
Sep 21 10:36:09.426: INFO: Created: latency-svc-8wg2f
Sep 21 10:36:09.461: INFO: Got endpoints: latency-svc-8wg2f [1.16205023s]
Sep 21 10:36:09.485: INFO: Created: latency-svc-xcxzv
Sep 21 10:36:09.511: INFO: Got endpoints: latency-svc-xcxzv [1.088184473s]
Sep 21 10:36:09.546: INFO: Created: latency-svc-99jw7
Sep 21 10:36:09.561: INFO: Got endpoints: latency-svc-99jw7 [1.13063702s]
Sep 21 10:36:09.606: INFO: Created: latency-svc-zvfvj
Sep 21 10:36:09.610: INFO: Got endpoints: latency-svc-zvfvj [1.118504613s]
Sep 21 10:36:09.641: INFO: Created: latency-svc-4l9lw
Sep 21 10:36:09.670: INFO: Got endpoints: latency-svc-4l9lw [1.082199154s]
Sep 21 10:36:09.691: INFO: Created: latency-svc-qzgkh
Sep 21 10:36:09.736: INFO: Got endpoints: latency-svc-qzgkh [1.02838415s]
Sep 21 10:36:09.750: INFO: Created: latency-svc-g8mv9
Sep 21 10:36:09.781: INFO: Got endpoints: latency-svc-g8mv9 [1.017623822s]
Sep 21 10:36:09.822: INFO: Created: latency-svc-zzx7s
Sep 21 10:36:09.919: INFO: Got endpoints: latency-svc-zzx7s [1.048970975s]
Sep 21 10:36:09.921: INFO: Created: latency-svc-4cgnh
Sep 21 10:36:09.933: INFO: Got endpoints: latency-svc-4cgnh [1.005155119s]
Sep 21 10:36:09.984: INFO: Created: latency-svc-bwq57
Sep 21 10:36:10.001: INFO: Got endpoints: latency-svc-bwq57 [1.003099735s]
Sep 21 10:36:10.087: INFO: Created: latency-svc-4zbzc
Sep 21 10:36:10.096: INFO: Got endpoints: latency-svc-4zbzc [1.062871559s]
Sep 21 10:36:10.128: INFO: Created: latency-svc-775lz
Sep 21 10:36:10.163: INFO: Got endpoints: latency-svc-775lz [987.580283ms]
Sep 21 10:36:10.228: INFO: Created: latency-svc-92299
Sep 21 10:36:10.241: INFO: Got endpoints: latency-svc-92299 [1.014684082s]
Sep 21 10:36:10.278: INFO: Created: latency-svc-9v22l
Sep 21 10:36:10.295: INFO: Got endpoints: latency-svc-9v22l [962.886506ms]
Sep 21 10:36:10.313: INFO: Created: latency-svc-lxtkb
Sep 21 10:36:10.411: INFO: Got endpoints: latency-svc-lxtkb [1.031317602s]
Sep 21 10:36:10.412: INFO: Created: latency-svc-fcjgj
Sep 21 10:36:10.434: INFO: Got endpoints: latency-svc-fcjgj [972.995377ms]
Sep 21 10:36:10.470: INFO: Created: latency-svc-t22qw
Sep 21 10:36:10.482: INFO: Got endpoints: latency-svc-t22qw [971.136147ms]
Sep 21 10:36:10.501: INFO: Created: latency-svc-qxr6n
Sep 21 10:36:10.560: INFO: Got endpoints: latency-svc-qxr6n [998.352151ms]
Sep 21 10:36:10.561: INFO: Created: latency-svc-5wzsc
Sep 21 10:36:10.567: INFO: Got endpoints: latency-svc-5wzsc [956.528699ms]
Sep 21 10:36:10.617: INFO: Created: latency-svc-7dj7v
Sep 21 10:36:10.627: INFO: Got endpoints: latency-svc-7dj7v [956.736834ms]
Sep 21 10:36:10.707: INFO: Created: latency-svc-8qttm
Sep 21 10:36:10.718: INFO: Got endpoints: latency-svc-8qttm [981.376861ms]
Sep 21 10:36:10.740: INFO: Created: latency-svc-dvw49
Sep 21 10:36:10.754: INFO: Got endpoints: latency-svc-dvw49 [973.453685ms]
Sep 21 10:36:10.788: INFO: Created: latency-svc-fkbjr
Sep 21 10:36:10.802: INFO: Got endpoints: latency-svc-fkbjr [882.177761ms]
Sep 21 10:36:10.851: INFO: Created: latency-svc-sqflc
Sep 21 10:36:10.857: INFO: Got endpoints: latency-svc-sqflc [923.643824ms]
Sep 21 10:36:10.902: INFO: Created: latency-svc-ssq74
Sep 21 10:36:10.912: INFO: Got endpoints: latency-svc-ssq74 [911.095127ms]
Sep 21 10:36:10.932: INFO: Created: latency-svc-s7952
Sep 21 10:36:10.943: INFO: Got endpoints: latency-svc-s7952 [846.526445ms]
Sep 21 10:36:11.001: INFO: Created: latency-svc-s27z8
Sep 21 10:36:11.035: INFO: Got endpoints: latency-svc-s27z8 [871.42365ms]
Sep 21 10:36:11.063: INFO: Created: latency-svc-69kbl
Sep 21 10:36:11.081: INFO: Got endpoints: latency-svc-69kbl [840.605963ms]
Sep 21 10:36:11.158: INFO: Created: latency-svc-g6gjg
Sep 21 10:36:11.165: INFO: Got endpoints: latency-svc-g6gjg [869.720955ms]
Sep 21 10:36:11.183: INFO: Created: latency-svc-49v8g
Sep 21 10:36:11.196: INFO: Got endpoints: latency-svc-49v8g [784.722388ms]
Sep 21 10:36:11.214: INFO: Created: latency-svc-sktrd
Sep 21 10:36:11.233: INFO: Got endpoints: latency-svc-sktrd [798.73549ms]
Sep 21 10:36:11.308: INFO: Created: latency-svc-mrj7f
Sep 21 10:36:11.340: INFO: Created: latency-svc-j89xf
Sep 21 10:36:11.341: INFO: Got endpoints: latency-svc-mrj7f [859.196437ms]
Sep 21 10:36:11.370: INFO: Got endpoints: latency-svc-j89xf [809.851937ms]
Sep 21 10:36:11.400: INFO: Created: latency-svc-wcmh4
Sep 21 10:36:11.444: INFO: Got endpoints: latency-svc-wcmh4 [876.949279ms]
Sep 21 10:36:11.458: INFO: Created: latency-svc-59mkv
Sep 21 10:36:11.474: INFO: Got endpoints: latency-svc-59mkv [846.832206ms]
Sep 21 10:36:11.490: INFO: Created: latency-svc-vn5ks
Sep 21 10:36:11.504: INFO: Got endpoints: latency-svc-vn5ks [785.379262ms]
Sep 21 10:36:11.526: INFO: Created: latency-svc-vj9w7
Sep 21 10:36:11.541: INFO: Got endpoints: latency-svc-vj9w7 [786.046446ms]
Sep 21 10:36:11.582: INFO: Created: latency-svc-mvd9j
Sep 21 10:36:11.589: INFO: Got endpoints: latency-svc-mvd9j [787.178288ms]
Sep 21 10:36:11.622: INFO: Created: latency-svc-ghvm2
Sep 21 10:36:11.658: INFO: Got endpoints: latency-svc-ghvm2 [801.107605ms]
Sep 21 10:36:11.731: INFO: Created: latency-svc-mv6mg
Sep 21 10:36:11.738: INFO: Got endpoints: latency-svc-mv6mg [826.037542ms]
Sep 21 10:36:11.759: INFO: Created: latency-svc-pjdvd
Sep 21 10:36:11.775: INFO: Got endpoints: latency-svc-pjdvd [831.544571ms]
Sep 21 10:36:11.809: INFO: Created: latency-svc-kjmmc
Sep 21 10:36:11.827: INFO: Got endpoints: latency-svc-kjmmc [791.65548ms]
Sep 21 10:36:11.867: INFO: Created: latency-svc-n6kgz
Sep 21 10:36:11.884: INFO: Got endpoints: latency-svc-n6kgz [801.873667ms]
Sep 21 10:36:11.916: INFO: Created: latency-svc-fvj4g
Sep 21 10:36:11.953: INFO: Got endpoints: latency-svc-fvj4g [787.442263ms]
Sep 21 10:36:12.019: INFO: Created: latency-svc-dgssk
Sep 21 10:36:12.023: INFO: Got endpoints: latency-svc-dgssk [826.591933ms]
Sep 21 10:36:12.048: INFO: Created: latency-svc-6q2km
Sep 21 10:36:12.058: INFO: Got endpoints: latency-svc-6q2km [824.855534ms]
Sep 21 10:36:12.089: INFO: Created: latency-svc-z7vxs
Sep 21 10:36:12.101: INFO: Got endpoints: latency-svc-z7vxs [759.809225ms]
Sep 21 10:36:12.163: INFO: Created: latency-svc-frpgn
Sep 21 10:36:12.166: INFO: Got endpoints: latency-svc-frpgn [796.009691ms]
Sep 21 10:36:12.191: INFO: Created: latency-svc-zmzlt
Sep 21 10:36:12.204: INFO: Got endpoints: latency-svc-zmzlt [759.798764ms]
Sep 21 10:36:12.234: INFO: Created: latency-svc-h9dw2
Sep 21 10:36:12.247: INFO: Got endpoints: latency-svc-h9dw2 [772.780646ms]
Sep 21 10:36:12.312: INFO: Created: latency-svc-nh2r7
Sep 21 10:36:12.317: INFO: Got endpoints: latency-svc-nh2r7 [812.904544ms]
Sep 21 10:36:12.360: INFO: Created: latency-svc-7kcqg
Sep 21 10:36:12.377: INFO: Got endpoints: latency-svc-7kcqg [835.996131ms]
Sep 21 10:36:13.099: INFO: Created: latency-svc-7mgf5
Sep 21 10:36:13.102: INFO: Got endpoints: latency-svc-7mgf5 [1.512284607s]
Sep 21 10:36:14.043: INFO: Created: latency-svc-cwm4z
Sep 21 10:36:14.047: INFO: Got endpoints: latency-svc-cwm4z [2.388535119s]
Sep 21 10:36:14.103: INFO: Created: latency-svc-tgw6j
Sep 21 10:36:14.133: INFO: Got endpoints: latency-svc-tgw6j [2.394441074s]
Sep 21 10:36:14.235: INFO: Created: latency-svc-c6g96
Sep 21 10:36:14.237: INFO: Got endpoints: latency-svc-c6g96 [2.461425299s]
Sep 21 10:36:14.258: INFO: Created: latency-svc-22khf
Sep 21 10:36:14.271: INFO: Got endpoints: latency-svc-22khf [2.444187008s]
Sep 21 10:36:14.288: INFO: Created: latency-svc-8vqfn
Sep 21 10:36:14.302: INFO: Got endpoints: latency-svc-8vqfn [2.418381538s]
Sep 21 10:36:14.330: INFO: Created: latency-svc-jdpnd
Sep 21 10:36:14.378: INFO: Got endpoints: latency-svc-jdpnd [2.424874494s]
Sep 21 10:36:14.390: INFO: Created: latency-svc-s8kqk
Sep 21 10:36:14.426: INFO: Got endpoints: latency-svc-s8kqk [2.403251663s]
Sep 21 10:36:14.469: INFO: Created: latency-svc-vm5rm
Sep 21 10:36:14.527: INFO: Got endpoints: latency-svc-vm5rm [2.468930523s]
Sep 21 10:36:14.531: INFO: Created: latency-svc-7v6rz
Sep 21 10:36:14.536: INFO: Got endpoints: latency-svc-7v6rz [2.434341556s]
Sep 21 10:36:14.565: INFO: Created: latency-svc-dwl7j
Sep 21 10:36:14.600: INFO: Got endpoints: latency-svc-dwl7j [2.433817405s]
Sep 21 10:36:15.253: INFO: Created: latency-svc-cg7fv
Sep 21 10:36:15.291: INFO: Created: latency-svc-688h5
Sep 21 10:36:15.292: INFO: Got endpoints: latency-svc-cg7fv [3.088412928s]
Sep 21 10:36:15.322: INFO: Got endpoints: latency-svc-688h5 [3.07477766s]
Sep 21 10:36:15.883: INFO: Created: latency-svc-jmvv2
Sep 21 10:36:15.919: INFO: Got endpoints: latency-svc-jmvv2 [3.602352877s]
Sep 21 10:36:15.949: INFO: Created: latency-svc-mxggz
Sep 21 10:36:16.031: INFO: Got endpoints: latency-svc-mxggz [3.653864444s]
Sep 21 10:36:16.033: INFO: Created: latency-svc-nmr4l
Sep 21 10:36:16.047: INFO: Got endpoints: latency-svc-nmr4l [2.945403168s]
Sep 21 10:36:16.093: INFO: Created: latency-svc-qjtgp
Sep 21 10:36:16.128: INFO: Got endpoints: latency-svc-qjtgp [2.080705709s]
Sep 21 10:36:16.178: INFO: Created: latency-svc-9fl6j
Sep 21 10:36:16.193: INFO: Got endpoints: latency-svc-9fl6j [2.059911077s]
Sep 21 10:36:16.214: INFO: Created: latency-svc-6nds2
Sep 21 10:36:16.244: INFO: Got endpoints: latency-svc-6nds2 [2.00670789s]
Sep 21 10:36:16.313: INFO: Created: latency-svc-szphq
Sep 21 10:36:16.327: INFO: Got endpoints: latency-svc-szphq [2.054640345s]
Sep 21 10:36:16.369: INFO: Created: latency-svc-g5p5j
Sep 21 10:36:16.493: INFO: Got endpoints: latency-svc-g5p5j [2.190122909s]
Sep 21 10:36:16.512: INFO: Created: latency-svc-q7grm
Sep 21 10:36:16.545: INFO: Got endpoints: latency-svc-q7grm [2.166917103s]
Sep 21 10:36:16.648: INFO: Created: latency-svc-wr5rc
Sep 21 10:36:16.694: INFO: Created: latency-svc-zfxn7
Sep 21 10:36:16.694: INFO: Got endpoints: latency-svc-wr5rc [2.267439311s]
Sep 21 10:36:16.716: INFO: Got endpoints: latency-svc-zfxn7 [2.18865952s]
Sep 21 10:36:16.797: INFO: Created: latency-svc-zsb26
Sep 21 10:36:16.802: INFO: Got endpoints: latency-svc-zsb26 [2.26550622s]
Sep 21 10:36:16.849: INFO: Created: latency-svc-xpnds
Sep 21 10:36:16.863: INFO: Got endpoints: latency-svc-xpnds [2.262724649s]
Sep 21 10:36:16.878: INFO: Created: latency-svc-m6869
Sep 21 10:36:16.961: INFO: Created: latency-svc-rjbpf
Sep 21 10:36:16.961: INFO: Got endpoints: latency-svc-m6869 [1.668104364s]
Sep 21 10:36:16.970: INFO: Got endpoints: latency-svc-rjbpf [1.64786525s]
Sep 21 10:36:16.999: INFO: Created: latency-svc-h9w2h
Sep 21 10:36:17.014: INFO: Got endpoints: latency-svc-h9w2h [1.094103051s]
Sep 21 10:36:17.029: INFO: Created: latency-svc-rbfh7
Sep 21 10:36:17.045: INFO: Got endpoints: latency-svc-rbfh7 [1.01357695s]
Sep 21 10:36:17.108: INFO: Created: latency-svc-dl4sk
Sep 21 10:36:17.111: INFO: Got endpoints: latency-svc-dl4sk [1.063394428s]
Sep 21 10:36:17.166: INFO: Created: latency-svc-jmmjf
Sep 21 10:36:17.204: INFO: Got endpoints: latency-svc-jmmjf [1.07557263s]
Sep 21 10:36:17.251: INFO: Created: latency-svc-xlw8l
Sep 21 10:36:17.267: INFO: Got endpoints: latency-svc-xlw8l [1.073063645s]
Sep 21 10:36:17.287: INFO: Created: latency-svc-v52g4
Sep 21 10:36:17.296: INFO: Got endpoints: latency-svc-v52g4 [1.052148243s]
Sep 21 10:36:17.329: INFO: Created: latency-svc-zs9bx
Sep 21 10:36:17.345: INFO: Got endpoints: latency-svc-zs9bx [1.018500674s]
Sep 21 10:36:17.402: INFO: Created: latency-svc-scbrx
Sep 21 10:36:17.403: INFO: Got endpoints: latency-svc-scbrx [910.408994ms]
Sep 21 10:36:17.430: INFO: Created: latency-svc-cg8gk
Sep 21 10:36:17.460: INFO: Got endpoints: latency-svc-cg8gk [914.941563ms]
Sep 21 10:36:17.490: INFO: Created: latency-svc-fm9vx
Sep 21 10:36:17.528: INFO: Got endpoints: latency-svc-fm9vx [833.73467ms]
Sep 21 10:36:17.538: INFO: Created: latency-svc-wb5rs
Sep 21 10:36:17.556: INFO: Got endpoints: latency-svc-wb5rs [839.39803ms]
Sep 21 10:36:17.575: INFO: Created: latency-svc-rkm62
Sep 21 10:36:17.586: INFO: Got endpoints: latency-svc-rkm62 [784.130273ms]
Sep 21 10:36:17.604: INFO: Created: latency-svc-xx6hm
Sep 21 10:36:17.617: INFO: Got endpoints: latency-svc-xx6hm [753.682789ms]
Sep 21 10:36:17.671: INFO: Created: latency-svc-whmb7
Sep 21 10:36:17.683: INFO: Got endpoints: latency-svc-whmb7 [722.170276ms]
Sep 21 10:36:17.707: INFO: Created: latency-svc-pwk6k
Sep 21 10:36:17.721: INFO: Got endpoints: latency-svc-pwk6k [750.701325ms]
Sep 21 10:36:17.803: INFO: Created: latency-svc-ngbgh
Sep 21 10:36:17.827: INFO: Got endpoints: latency-svc-ngbgh [812.715268ms]
Sep 21 10:36:17.829: INFO: Created: latency-svc-gsbcl
Sep 21 10:36:17.840: INFO: Got endpoints: latency-svc-gsbcl [795.399966ms]
Sep 21 10:36:17.856: INFO: Created: latency-svc-9cg2b
Sep 21 10:36:17.871: INFO: Got endpoints: latency-svc-9cg2b [759.738742ms]
Sep 21 10:36:17.886: INFO: Created: latency-svc-l9gw9
Sep 21 10:36:17.960: INFO: Got endpoints: latency-svc-l9gw9 [755.998656ms]
Sep 21 10:36:17.961: INFO: Created: latency-svc-68kmg
Sep 21 10:36:17.970: INFO: Got endpoints: latency-svc-68kmg [702.937128ms]
Sep 21 10:36:17.990: INFO: Created: latency-svc-sjghm
Sep 21 10:36:18.001: INFO: Got endpoints: latency-svc-sjghm [705.023427ms]
Sep 21 10:36:18.019: INFO: Created: latency-svc-n2rld
Sep 21 10:36:18.032: INFO: Got endpoints: latency-svc-n2rld [686.009903ms]
Sep 21 10:36:18.054: INFO: Created: latency-svc-8szhp
Sep 21 10:36:18.091: INFO: Got endpoints: latency-svc-8szhp [686.997677ms]
Sep 21 10:36:18.115: INFO: Created: latency-svc-4jztz
Sep 21 10:36:18.187: INFO: Got endpoints: latency-svc-4jztz [726.357633ms]
Sep 21 10:36:18.235: INFO: Created: latency-svc-sbhz6
Sep 21 10:36:18.248: INFO: Got endpoints: latency-svc-sbhz6 [719.670779ms]
Sep 21 10:36:18.271: INFO: Created: latency-svc-kkj89
Sep 21 10:36:18.304: INFO: Got endpoints: latency-svc-kkj89 [748.165257ms]
Sep 21 10:36:18.354: INFO: Created: latency-svc-swljd
Sep 21 10:36:18.359: INFO: Got endpoints: latency-svc-swljd [772.420982ms]
Sep 21 10:36:18.438: INFO: Created: latency-svc-ttz6m
Sep 21 10:36:18.454: INFO: Got endpoints: latency-svc-ttz6m [836.394173ms]
Sep 21 10:36:18.498: INFO: Created: latency-svc-w6x5x
Sep 21 10:36:18.501: INFO: Got endpoints: latency-svc-w6x5x [818.083572ms]
Sep 21 10:36:18.553: INFO: Created: latency-svc-pz8sb
Sep 21 10:36:18.562: INFO: Got endpoints: latency-svc-pz8sb [841.193276ms]
Sep 21 10:36:18.585: INFO: Created: latency-svc-6rrh9
Sep 21 10:36:18.642: INFO: Got endpoints: latency-svc-6rrh9 [814.80401ms]
Sep 21 10:36:18.678: INFO: Created: latency-svc-92n6c
Sep 21 10:36:18.707: INFO: Got endpoints: latency-svc-92n6c [866.575198ms]
Sep 21 10:36:18.786: INFO: Created: latency-svc-njdlf
Sep 21 10:36:18.790: INFO: Got endpoints: latency-svc-njdlf [918.959184ms]
Sep 21 10:36:18.816: INFO: Created: latency-svc-g8qls
Sep 21 10:36:18.833: INFO: Got endpoints: latency-svc-g8qls [872.91767ms]
Sep 21 10:36:18.853: INFO: Created: latency-svc-2ntzb
Sep 21 10:36:18.869: INFO: Got endpoints: latency-svc-2ntzb [898.976594ms]
Sep 21 10:36:18.936: INFO: Created: latency-svc-8rkvn
Sep 21 10:36:18.940: INFO: Got endpoints: latency-svc-8rkvn [938.426436ms]
Sep 21 10:36:18.985: INFO: Created: latency-svc-2d5t7
Sep 21 10:36:18.996: INFO: Got endpoints: latency-svc-2d5t7 [963.528658ms]
Sep 21 10:36:19.014: INFO: Created: latency-svc-l69hq
Sep 21 10:36:19.026: INFO: Got endpoints: latency-svc-l69hq [934.937125ms]
Sep 21 10:36:19.072: INFO: Created: latency-svc-ddpnj
Sep 21 10:36:19.077: INFO: Got endpoints: latency-svc-ddpnj [890.313223ms]
Sep 21 10:36:19.099: INFO: Created: latency-svc-kvbql
Sep 21 10:36:19.124: INFO: Got endpoints: latency-svc-kvbql [875.237402ms]
Sep 21 10:36:19.147: INFO: Created: latency-svc-t55kr
Sep 21 10:36:19.158: INFO: Got endpoints: latency-svc-t55kr [853.290364ms]
Sep 21 10:36:19.210: INFO: Created: latency-svc-n2pjn
Sep 21 10:36:19.215: INFO: Got endpoints: latency-svc-n2pjn [855.642833ms]
Sep 21 10:36:19.242: INFO: Created: latency-svc-flg4c
Sep 21 10:36:19.255: INFO: Got endpoints: latency-svc-flg4c [801.51392ms]
Sep 21 10:36:19.272: INFO: Created: latency-svc-wcvcg
Sep 21 10:36:19.289: INFO: Got endpoints: latency-svc-wcvcg [787.143937ms]
Sep 21 10:36:19.353: INFO: Created: latency-svc-ctbdt
Sep 21 10:36:19.381: INFO: Got endpoints: latency-svc-ctbdt [818.470429ms]
Sep 21 10:36:19.409: INFO: Created: latency-svc-wcc6c
Sep 21 10:36:19.426: INFO: Got endpoints: latency-svc-wcc6c [783.6669ms]
Sep 21 10:36:19.446: INFO: Created: latency-svc-4bwpm
Sep 21 10:36:19.480: INFO: Got endpoints: latency-svc-4bwpm [772.747686ms]
Sep 21 10:36:19.488: INFO: Created: latency-svc-4n85h
Sep 21 10:36:19.505: INFO: Got endpoints: latency-svc-4n85h [715.107841ms]
Sep 21 10:36:19.525: INFO: Created: latency-svc-bb67t
Sep 21 10:36:19.535: INFO: Got endpoints: latency-svc-bb67t [701.68394ms]
Sep 21 10:36:19.555: INFO: Created: latency-svc-rsnx7
Sep 21 10:36:19.573: INFO: Got endpoints: latency-svc-rsnx7 [703.542963ms]
Sep 21 10:36:19.630: INFO: Created: latency-svc-tzfl5
Sep 21 10:36:19.638: INFO: Got endpoints: latency-svc-tzfl5 [697.717604ms]
Sep 21 10:36:19.656: INFO: Created: latency-svc-mll4n
Sep 21 10:36:19.668: INFO: Got endpoints:
latency-svc-mll4n [671.9516ms] Sep 21 10:36:19.698: INFO: Created: latency-svc-mjb86 Sep 21 10:36:19.711: INFO: Got endpoints: latency-svc-mjb86 [685.465503ms] Sep 21 10:36:19.767: INFO: Created: latency-svc-zzzqz Sep 21 10:36:19.771: INFO: Got endpoints: latency-svc-zzzqz [693.544497ms] Sep 21 10:36:19.794: INFO: Created: latency-svc-ckzgd Sep 21 10:36:19.820: INFO: Got endpoints: latency-svc-ckzgd [695.630558ms] Sep 21 10:36:19.848: INFO: Created: latency-svc-kkrhh Sep 21 10:36:19.861: INFO: Got endpoints: latency-svc-kkrhh [703.023722ms] Sep 21 10:36:19.911: INFO: Created: latency-svc-svct8 Sep 21 10:36:19.932: INFO: Got endpoints: latency-svc-svct8 [717.394369ms] Sep 21 10:36:19.933: INFO: Created: latency-svc-wctbc Sep 21 10:36:19.946: INFO: Got endpoints: latency-svc-wctbc [689.995676ms] Sep 21 10:36:19.962: INFO: Created: latency-svc-lbhsm Sep 21 10:36:19.976: INFO: Got endpoints: latency-svc-lbhsm [686.936225ms] Sep 21 10:36:19.978: INFO: Latencies: [59.301455ms 119.206168ms 150.200349ms 172.808036ms 228.724198ms 259.358737ms 311.870232ms 347.604009ms 379.655843ms 449.788594ms 536.420901ms 622.347498ms 671.9516ms 680.451701ms 685.465503ms 686.009903ms 686.936225ms 686.997677ms 689.995676ms 693.544497ms 695.630558ms 697.717604ms 701.68394ms 702.937128ms 703.023722ms 703.542963ms 705.023427ms 715.107841ms 717.394369ms 719.670779ms 722.170276ms 726.357633ms 748.165257ms 750.701325ms 753.682789ms 755.998656ms 759.738742ms 759.798764ms 759.809225ms 772.420982ms 772.747686ms 772.780646ms 775.674538ms 782.316102ms 783.6669ms 784.130273ms 784.722388ms 785.379262ms 786.046446ms 787.143937ms 787.178288ms 787.442263ms 789.334557ms 791.65548ms 795.399966ms 796.009691ms 798.73549ms 801.107605ms 801.51392ms 801.873667ms 809.851937ms 811.500973ms 811.562522ms 812.715268ms 812.904544ms 814.80401ms 818.083572ms 818.470429ms 824.855534ms 826.037542ms 826.591933ms 831.544571ms 833.73467ms 835.300714ms 835.996131ms 836.394173ms 839.39803ms 840.605963ms 841.193276ms 
846.526445ms 846.832206ms 853.290364ms 855.642833ms 859.196437ms 866.575198ms 869.70503ms 869.720955ms 871.42365ms 872.91767ms 874.63433ms 875.237402ms 876.949279ms 877.85791ms 882.177761ms 882.484393ms 890.313223ms 898.976594ms 904.313533ms 910.408994ms 911.095127ms 914.709013ms 914.941563ms 918.388297ms 918.959184ms 923.643824ms 923.746273ms 932.350393ms 934.937125ms 935.048774ms 938.426436ms 953.279803ms 956.528699ms 956.736834ms 959.385918ms 962.886506ms 963.528658ms 969.771066ms 971.136147ms 972.995377ms 973.453685ms 979.737499ms 981.376861ms 986.448297ms 987.580283ms 989.32437ms 995.511457ms 998.352151ms 1.001834733s 1.003099735s 1.005155119s 1.01357695s 1.014684082s 1.017623822s 1.018500674s 1.020492589s 1.026596101s 1.02838415s 1.031317602s 1.04167248s 1.048970975s 1.049094188s 1.052148243s 1.062871559s 1.063394428s 1.070686501s 1.070956537s 1.073063645s 1.07557263s 1.082199154s 1.087871571s 1.088184473s 1.088666642s 1.094103051s 1.102435774s 1.11109228s 1.116884772s 1.117265932s 1.117972243s 1.118504613s 1.13063702s 1.132639689s 1.13281695s 1.143956353s 1.152534143s 1.156479204s 1.16205023s 1.174236454s 1.174624248s 1.178528966s 1.181869742s 1.186192802s 1.240445304s 1.512284607s 1.64786525s 1.668104364s 2.00670789s 2.054640345s 2.059911077s 2.080705709s 2.166917103s 2.18865952s 2.190122909s 2.262724649s 2.26550622s 2.267439311s 2.388535119s 2.394441074s 2.403251663s 2.418381538s 2.424874494s 2.433817405s 2.434341556s 2.444187008s 2.461425299s 2.468930523s 2.945403168s 3.07477766s 3.088412928s 3.602352877s 3.653864444s] Sep 21 10:36:19.980: INFO: 50 %ile: 914.709013ms Sep 21 10:36:19.981: INFO: 90 %ile: 2.18865952s Sep 21 10:36:19.981: INFO: 99 %ile: 3.602352877s Sep 21 10:36:19.981: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:36:19.981: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-1638" for this suite. • [SLOW TEST:18.079 seconds] [sig-network] Service endpoints latency /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":303,"completed":68,"skipped":1208,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:36:20.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local A)" && test 
-n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3393.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-3393.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3393.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-3393.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3393.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3393.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-3393.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3393.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-3393.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3393.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 21 10:36:26.224: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:26.277: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:26.284: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:26.303: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:26.368: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:26.425: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local from 
pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:26.453: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:26.482: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:26.569: INFO: Lookups using dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3393.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3393.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local jessie_udp@dns-test-service-2.dns-3393.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3393.svc.cluster.local] Sep 21 10:36:31.579: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:31.627: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:31.644: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3393.svc.cluster.local from 
pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:31.693: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:31.715: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:31.742: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:31.759: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:31.771: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:31.854: INFO: Lookups using dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3393.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3393.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local jessie_udp@dns-test-service-2.dns-3393.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3393.svc.cluster.local] Sep 21 10:36:36.597: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:36.613: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:36.619: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:36.714: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:36.769: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:36.857: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:36.867: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3393.svc.cluster.local from pod 
dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:36.871: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:36.902: INFO: Lookups using dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3393.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3393.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local jessie_udp@dns-test-service-2.dns-3393.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3393.svc.cluster.local] Sep 21 10:36:41.582: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:41.598: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:41.632: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:41.659: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3393.svc.cluster.local from pod 
dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:41.767: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:41.772: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:41.801: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:41.844: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:41.856: INFO: Lookups using dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3393.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3393.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local jessie_udp@dns-test-service-2.dns-3393.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3393.svc.cluster.local] Sep 21 10:36:46.577: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:46.582: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:46.587: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:46.595: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:46.607: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:46.611: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:46.615: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:46.619: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:46.627: INFO: Lookups using dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3393.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3393.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local jessie_udp@dns-test-service-2.dns-3393.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3393.svc.cluster.local] Sep 21 10:36:51.577: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:51.582: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:51.586: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:51.590: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:51.609: INFO: Unable to read 
jessie_udp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:51.613: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:51.616: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:51.620: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3393.svc.cluster.local from pod dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5: the server could not find the requested resource (get pods dns-test-06357182-9437-406a-bae4-ecc1b91368f5) Sep 21 10:36:51.627: INFO: Lookups using dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3393.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3393.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3393.svc.cluster.local jessie_udp@dns-test-service-2.dns-3393.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3393.svc.cluster.local] Sep 21 10:36:56.624: INFO: DNS probes using dns-3393/dns-test-06357182-9437-406a-bae4-ecc1b91368f5 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:36:57.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3393" for this suite. • [SLOW TEST:37.347 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":303,"completed":69,"skipped":1217,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:36:57.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy Sep 21 10:36:57.603: INFO: 
Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix057632407/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:36:58.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2341" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":303,"completed":70,"skipped":1225,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:36:58.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:37:16.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1515" for this suite. • [SLOW TEST:17.428 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":303,"completed":71,"skipped":1236,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:37:16.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 10:37:16.182: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:37:20.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8320" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":303,"completed":72,"skipped":1295,"failed":0} S ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:37:20.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Sep 21 10:37:20.499: INFO: Pod name pod-release: Found 0 pods out of 1 Sep 21 10:37:25.505: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:37:26.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3486" for this suite. 
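[editor's note] The ReplicationController test that just completed creates an RC, changes a managed pod's label so it no longer matches the selector, and verifies the pod is "released" (orphaned) while the RC starts a replacement. A sketch of the RC involved, assuming the `pod-release` name from the log and an illustrative image:

```yaml
# Hypothetical RC matching the log's "pod-release" pod prefix.
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release          # pods losing this label are released
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pod-release
        image: docker.io/library/httpd:2.4.38-alpine   # illustrative image
```

Relabeling the managed pod, e.g. `kubectl label pod <pod> name=released --overwrite`, takes it out of the selector; the RC drops its ownerReference on the pod and creates a new one to restore `replicas: 1`.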
• [SLOW TEST:6.192 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":303,"completed":73,"skipped":1296,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:37:26.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-20ab1a8c-5c18-4878-b543-2c5e4fdde355 STEP: Creating a pod to test consume configMaps Sep 21 10:37:26.757: INFO: Waiting up to 5m0s for pod "pod-configmaps-57273a7e-a0ac-4c45-8e72-9ed6b9a392f4" in namespace "configmap-1292" to be "Succeeded or Failed" Sep 21 10:37:26.771: INFO: Pod "pod-configmaps-57273a7e-a0ac-4c45-8e72-9ed6b9a392f4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.458383ms Sep 21 10:37:28.779: INFO: Pod "pod-configmaps-57273a7e-a0ac-4c45-8e72-9ed6b9a392f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022499091s Sep 21 10:37:30.787: INFO: Pod "pod-configmaps-57273a7e-a0ac-4c45-8e72-9ed6b9a392f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03062569s Sep 21 10:37:32.795: INFO: Pod "pod-configmaps-57273a7e-a0ac-4c45-8e72-9ed6b9a392f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037947478s STEP: Saw pod success Sep 21 10:37:32.795: INFO: Pod "pod-configmaps-57273a7e-a0ac-4c45-8e72-9ed6b9a392f4" satisfied condition "Succeeded or Failed" Sep 21 10:37:32.799: INFO: Trying to get logs from node kali-worker pod pod-configmaps-57273a7e-a0ac-4c45-8e72-9ed6b9a392f4 container configmap-volume-test: STEP: delete the pod Sep 21 10:37:32.900: INFO: Waiting for pod pod-configmaps-57273a7e-a0ac-4c45-8e72-9ed6b9a392f4 to disappear Sep 21 10:37:32.907: INFO: Pod pod-configmaps-57273a7e-a0ac-4c45-8e72-9ed6b9a392f4 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:37:32.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1292" for this suite. 
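[editor's note] "With mappings" in the ConfigMap volume test that just completed refers to the `items` field, which remaps a ConfigMap key to a custom file path inside the volume. A sketch under assumed names (the log only shows generated UUID-suffixed names):

```yaml
# Hypothetical ConfigMap plus a pod that consumes one key at a remapped path.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map   # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps              # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1                 # the "mapping": key -> new relative path
        path: path/to/data-1
```

The pod is expected to reach `Succeeded` once the container reads the mapped file and exits, matching the "Succeeded or Failed" condition the test waits on.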
• [SLOW TEST:6.379 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":74,"skipped":1305,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:37:32.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl replace /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1581 [It] should update a single-container pod's image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Sep 21 
10:37:32.994: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-8569' Sep 21 10:37:34.348: INFO: stderr: "" Sep 21 10:37:34.348: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Sep 21 10:37:39.402: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-8569 -o json' Sep 21 10:37:40.600: INFO: stderr: "" Sep 21 10:37:40.601: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-09-21T10:37:34Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-09-21T10:37:34Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n 
\"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.127\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-09-21T10:37:37Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-8569\",\n \"resourceVersion\": \"2053972\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-8569/pods/e2e-test-httpd-pod\",\n \"uid\": \"fa3135bf-0037-4442-8ecb-663b6831f856\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-lr2gn\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"kali-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n 
\"volumes\": [\n {\n \"name\": \"default-token-lr2gn\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-lr2gn\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-21T10:37:34Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-21T10:37:37Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-21T10:37:37Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-21T10:37:34Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://b7709638c4fe37fb686a8d2907cedbfdf7a51efcdba0cd765a1fa224e5ff005d\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-09-21T10:37:36Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.127\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.127\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-09-21T10:37:34Z\"\n }\n}\n" STEP: replace the image in the pod Sep 21 10:37:40.605: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-8569' Sep 21 10:37:43.315: INFO: stderr: "" Sep 21 10:37:43.315: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1586 Sep 21 10:37:43.330: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8569' Sep 21 10:37:47.246: INFO: stderr: "" Sep 21 10:37:47.246: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:37:47.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8569" for this suite. • [SLOW TEST:14.357 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1577 should update a single-container pod's image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":303,"completed":75,"skipped":1308,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:37:47.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-swbl STEP: Creating a pod to test atomic-volume-subpath Sep 21 10:37:47.397: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-swbl" in namespace "subpath-9743" to be "Succeeded or Failed" Sep 21 10:37:47.422: INFO: Pod "pod-subpath-test-configmap-swbl": Phase="Pending", Reason="", readiness=false. Elapsed: 24.386577ms Sep 21 10:37:49.430: INFO: Pod "pod-subpath-test-configmap-swbl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032885787s Sep 21 10:37:51.438: INFO: Pod "pod-subpath-test-configmap-swbl": Phase="Running", Reason="", readiness=true. Elapsed: 4.040985342s Sep 21 10:37:53.447: INFO: Pod "pod-subpath-test-configmap-swbl": Phase="Running", Reason="", readiness=true. Elapsed: 6.049492484s Sep 21 10:37:55.455: INFO: Pod "pod-subpath-test-configmap-swbl": Phase="Running", Reason="", readiness=true. Elapsed: 8.057330071s Sep 21 10:37:57.462: INFO: Pod "pod-subpath-test-configmap-swbl": Phase="Running", Reason="", readiness=true. Elapsed: 10.064717238s Sep 21 10:37:59.472: INFO: Pod "pod-subpath-test-configmap-swbl": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.074264442s Sep 21 10:38:01.484: INFO: Pod "pod-subpath-test-configmap-swbl": Phase="Running", Reason="", readiness=true. Elapsed: 14.086458812s Sep 21 10:38:03.492: INFO: Pod "pod-subpath-test-configmap-swbl": Phase="Running", Reason="", readiness=true. Elapsed: 16.094841909s Sep 21 10:38:05.501: INFO: Pod "pod-subpath-test-configmap-swbl": Phase="Running", Reason="", readiness=true. Elapsed: 18.104006425s Sep 21 10:38:07.509: INFO: Pod "pod-subpath-test-configmap-swbl": Phase="Running", Reason="", readiness=true. Elapsed: 20.111451757s Sep 21 10:38:09.517: INFO: Pod "pod-subpath-test-configmap-swbl": Phase="Running", Reason="", readiness=true. Elapsed: 22.120202592s Sep 21 10:38:11.525: INFO: Pod "pod-subpath-test-configmap-swbl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.127805687s STEP: Saw pod success Sep 21 10:38:11.525: INFO: Pod "pod-subpath-test-configmap-swbl" satisfied condition "Succeeded or Failed" Sep 21 10:38:11.530: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-swbl container test-container-subpath-configmap-swbl: STEP: delete the pod Sep 21 10:38:11.667: INFO: Waiting for pod pod-subpath-test-configmap-swbl to disappear Sep 21 10:38:11.673: INFO: Pod pod-subpath-test-configmap-swbl no longer exists STEP: Deleting pod pod-subpath-test-configmap-swbl Sep 21 10:38:11.673: INFO: Deleting pod "pod-subpath-test-configmap-swbl" in namespace "subpath-9743" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:38:11.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9743" for this suite. 
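[editor's note] The subpath test that just completed mounts a single entry of a ConfigMap-backed volume via `subPath` and verifies reads stay consistent while the atomic writer updates the volume underneath. A minimal sketch (ConfigMap name and key are assumptions, not from the log):

```yaml
# Hypothetical pod mounting one entry of a configMap volume via subPath.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-configmap
    image: busybox:1.29
    command: ["/bin/sh", "-c", "cat /test-volume && sleep 10"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
      subPath: sub                   # mount only this entry of the volume
  volumes:
  - name: test-volume
    configMap:
      name: my-configmap             # assumed to contain a key named "sub"
```

The ~24 s runtime in the log reflects the test polling the pod through several `Running` checks while updates land, before it finally reports `Succeeded`.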
• [SLOW TEST:24.408 seconds] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":303,"completed":76,"skipped":1311,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:38:11.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-204ba67d-8cc0-4283-92fc-ab53f8628c63 STEP: Creating secret with name s-test-opt-upd-fd9ec969-484f-4ac8-aef1-42ce8f0db152 STEP: Creating the pod STEP: Deleting secret 
s-test-opt-del-204ba67d-8cc0-4283-92fc-ab53f8628c63 STEP: Updating secret s-test-opt-upd-fd9ec969-484f-4ac8-aef1-42ce8f0db152 STEP: Creating secret with name s-test-opt-create-acb3a165-ff41-4270-a907-b50fb10846f9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:38:19.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3914" for this suite. • [SLOW TEST:8.283 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":77,"skipped":1332,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:38:19.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support 
(root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Sep 21 10:38:20.072: INFO: Waiting up to 5m0s for pod "pod-c6e86d4f-7557-4092-99da-d6fa464b02e6" in namespace "emptydir-1319" to be "Succeeded or Failed" Sep 21 10:38:20.107: INFO: Pod "pod-c6e86d4f-7557-4092-99da-d6fa464b02e6": Phase="Pending", Reason="", readiness=false. Elapsed: 34.68175ms Sep 21 10:38:22.116: INFO: Pod "pod-c6e86d4f-7557-4092-99da-d6fa464b02e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043367701s Sep 21 10:38:24.125: INFO: Pod "pod-c6e86d4f-7557-4092-99da-d6fa464b02e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052410909s STEP: Saw pod success Sep 21 10:38:24.125: INFO: Pod "pod-c6e86d4f-7557-4092-99da-d6fa464b02e6" satisfied condition "Succeeded or Failed" Sep 21 10:38:24.130: INFO: Trying to get logs from node kali-worker2 pod pod-c6e86d4f-7557-4092-99da-d6fa464b02e6 container test-container: STEP: delete the pod Sep 21 10:38:24.162: INFO: Waiting for pod pod-c6e86d4f-7557-4092-99da-d6fa464b02e6 to disappear Sep 21 10:38:24.172: INFO: Pod pod-c6e86d4f-7557-4092-99da-d6fa464b02e6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:38:24.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1319" for this suite. 
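[editor's note] The `(root,0666,tmpfs)` emptyDir variant that just completed checks file-permission behavior on a memory-backed volume. A hedged sketch of an equivalent pod (the real test uses the agnhost mounttest image; the busybox command here is a stand-in):

```yaml
# Hypothetical pod: create a 0666 file on a tmpfs-backed emptyDir and
# print its mode back, approximating what the mounttest container verifies.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["/bin/sh", "-c",
      "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory         # tmpfs, per the (root,0666,tmpfs) variant
```

`medium: Memory` is what distinguishes the tmpfs variants from the default disk-backed emptyDir tests elsewhere in the suite.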
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":78,"skipped":1348,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:38:24.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Sep 21 10:38:38.851: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Sep 21 10:38:40.872: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281518, loc:(*time.Location)(0x5d1d160)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281518, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281518, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281518, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 21 10:38:42.880: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281518, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281518, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281518, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736281518, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 21 10:38:45.916: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 10:38:45.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:38:47.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7459" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:22.983 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":303,"completed":79,"skipped":1350,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper 
volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:38:47.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Sep 21 10:38:49.005: INFO: Pod name wrapped-volume-race-0a2da180-e1eb-4995-8990-eed1cd9b3628: Found 0 pods out of 5 Sep 21 10:38:54.024: INFO: Pod name wrapped-volume-race-0a2da180-e1eb-4995-8990-eed1cd9b3628: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-0a2da180-e1eb-4995-8990-eed1cd9b3628 in namespace emptydir-wrapper-2816, will wait for the garbage collector to delete the pods Sep 21 10:39:08.618: INFO: Deleting ReplicationController wrapped-volume-race-0a2da180-e1eb-4995-8990-eed1cd9b3628 took: 9.57092ms Sep 21 10:39:09.119: INFO: Terminating ReplicationController wrapped-volume-race-0a2da180-e1eb-4995-8990-eed1cd9b3628 pods took: 501.01222ms STEP: Creating RC which spawns configmap-volume pods Sep 21 10:39:23.364: INFO: Pod name wrapped-volume-race-60029450-a3bf-4014-b841-7112e7233db0: Found 0 pods out of 5 Sep 21 10:39:28.389: INFO: Pod name wrapped-volume-race-60029450-a3bf-4014-b841-7112e7233db0: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-60029450-a3bf-4014-b841-7112e7233db0 in namespace emptydir-wrapper-2816, will wait for the garbage collector to delete the pods Sep 21 10:39:42.532: INFO: Deleting ReplicationController 
wrapped-volume-race-60029450-a3bf-4014-b841-7112e7233db0 took: 10.097193ms Sep 21 10:39:43.033: INFO: Terminating ReplicationController wrapped-volume-race-60029450-a3bf-4014-b841-7112e7233db0 pods took: 501.20992ms STEP: Creating RC which spawns configmap-volume pods Sep 21 10:39:53.385: INFO: Pod name wrapped-volume-race-303c6343-fbd2-49f2-9353-5d367cf87bad: Found 0 pods out of 5 Sep 21 10:39:58.411: INFO: Pod name wrapped-volume-race-303c6343-fbd2-49f2-9353-5d367cf87bad: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-303c6343-fbd2-49f2-9353-5d367cf87bad in namespace emptydir-wrapper-2816, will wait for the garbage collector to delete the pods Sep 21 10:40:13.518: INFO: Deleting ReplicationController wrapped-volume-race-303c6343-fbd2-49f2-9353-5d367cf87bad took: 10.339568ms Sep 21 10:40:14.019: INFO: Terminating ReplicationController wrapped-volume-race-303c6343-fbd2-49f2-9353-5d367cf87bad pods took: 501.01118ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:40:23.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2816" for this suite. 
• [SLOW TEST:96.746 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":303,"completed":80,"skipped":1365,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:40:23.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Sep 21 10:40:24.020: INFO: Waiting up to 5m0s for pod "pod-7baaeb56-bd5e-4ce6-908d-ccef00ae5deb" in namespace "emptydir-9643" to be "Succeeded or Failed" Sep 21 10:40:24.059: INFO: Pod "pod-7baaeb56-bd5e-4ce6-908d-ccef00ae5deb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 39.224533ms Sep 21 10:40:26.064: INFO: Pod "pod-7baaeb56-bd5e-4ce6-908d-ccef00ae5deb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044068082s Sep 21 10:40:28.206: INFO: Pod "pod-7baaeb56-bd5e-4ce6-908d-ccef00ae5deb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185456537s Sep 21 10:40:30.213: INFO: Pod "pod-7baaeb56-bd5e-4ce6-908d-ccef00ae5deb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.192705138s STEP: Saw pod success Sep 21 10:40:30.213: INFO: Pod "pod-7baaeb56-bd5e-4ce6-908d-ccef00ae5deb" satisfied condition "Succeeded or Failed" Sep 21 10:40:30.219: INFO: Trying to get logs from node kali-worker2 pod pod-7baaeb56-bd5e-4ce6-908d-ccef00ae5deb container test-container: STEP: delete the pod Sep 21 10:40:30.289: INFO: Waiting for pod pod-7baaeb56-bd5e-4ce6-908d-ccef00ae5deb to disappear Sep 21 10:40:30.324: INFO: Pod pod-7baaeb56-bd5e-4ce6-908d-ccef00ae5deb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:40:30.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9643" for this suite. 
• [SLOW TEST:6.437 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":81,"skipped":1409,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:40:30.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 21 10:40:30.503: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-72a40fdf-f63a-4791-b0af-11e06a162836" in namespace "downward-api-8091" to be "Succeeded or Failed" Sep 21 10:40:30.530: INFO: Pod "downwardapi-volume-72a40fdf-f63a-4791-b0af-11e06a162836": Phase="Pending", Reason="", readiness=false. Elapsed: 26.029457ms Sep 21 10:40:32.541: INFO: Pod "downwardapi-volume-72a40fdf-f63a-4791-b0af-11e06a162836": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037162431s Sep 21 10:40:34.549: INFO: Pod "downwardapi-volume-72a40fdf-f63a-4791-b0af-11e06a162836": Phase="Running", Reason="", readiness=true. Elapsed: 4.045616928s Sep 21 10:40:36.557: INFO: Pod "downwardapi-volume-72a40fdf-f63a-4791-b0af-11e06a162836": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.053790918s STEP: Saw pod success Sep 21 10:40:36.558: INFO: Pod "downwardapi-volume-72a40fdf-f63a-4791-b0af-11e06a162836" satisfied condition "Succeeded or Failed" Sep 21 10:40:36.562: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-72a40fdf-f63a-4791-b0af-11e06a162836 container client-container: STEP: delete the pod Sep 21 10:40:36.591: INFO: Waiting for pod downwardapi-volume-72a40fdf-f63a-4791-b0af-11e06a162836 to disappear Sep 21 10:40:36.612: INFO: Pod downwardapi-volume-72a40fdf-f63a-4791-b0af-11e06a162836 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:40:36.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8091" for this suite. 
• [SLOW TEST:6.293 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":82,"skipped":1457,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:40:36.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-c5d56323-2da5-4c55-8fe8-c9b364e06e60 in namespace container-probe-8073 Sep 21 10:40:40.824: INFO: Started pod 
liveness-c5d56323-2da5-4c55-8fe8-c9b364e06e60 in namespace container-probe-8073 STEP: checking the pod's current state and verifying that restartCount is present Sep 21 10:40:40.829: INFO: Initial restart count of pod liveness-c5d56323-2da5-4c55-8fe8-c9b364e06e60 is 0 Sep 21 10:40:56.986: INFO: Restart count of pod container-probe-8073/liveness-c5d56323-2da5-4c55-8fe8-c9b364e06e60 is now 1 (16.156014778s elapsed) Sep 21 10:41:17.063: INFO: Restart count of pod container-probe-8073/liveness-c5d56323-2da5-4c55-8fe8-c9b364e06e60 is now 2 (36.23325832s elapsed) Sep 21 10:41:37.140: INFO: Restart count of pod container-probe-8073/liveness-c5d56323-2da5-4c55-8fe8-c9b364e06e60 is now 3 (56.310656026s elapsed) Sep 21 10:41:59.630: INFO: Restart count of pod container-probe-8073/liveness-c5d56323-2da5-4c55-8fe8-c9b364e06e60 is now 4 (1m18.800524423s elapsed) Sep 21 10:43:11.967: INFO: Restart count of pod container-probe-8073/liveness-c5d56323-2da5-4c55-8fe8-c9b364e06e60 is now 5 (2m31.137244507s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:43:12.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8073" for this suite. 
• [SLOW TEST:155.372 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":303,"completed":83,"skipped":1487,"failed":0} SSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:43:12.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 21 10:43:12.114: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 21 10:43:12.383: INFO: Waiting for terminating namespaces to be deleted... 
Sep 21 10:43:12.388: INFO: Logging pods the apiserver thinks are on node kali-worker before test Sep 21 10:43:12.396: INFO: kindnet-jk7qk from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 21 10:43:12.396: INFO: Container kindnet-cni ready: true, restart count 0 Sep 21 10:43:12.396: INFO: kube-proxy-kz8hk from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 21 10:43:12.396: INFO: Container kube-proxy ready: true, restart count 0 Sep 21 10:43:12.396: INFO: Logging pods the apiserver thinks are on node kali-worker2 before test Sep 21 10:43:12.405: INFO: kindnet-r64bh from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 21 10:43:12.405: INFO: Container kindnet-cni ready: true, restart count 0 Sep 21 10:43:12.405: INFO: kube-proxy-rnv9w from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 21 10:43:12.405: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-eaf401cb-cbe3-4ee7-85e2-f2b2d37d527a 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-eaf401cb-cbe3-4ee7-85e2-f2b2d37d527a off the node kali-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-eaf401cb-cbe3-4ee7-85e2-f2b2d37d527a [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:43:20.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5298" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.633 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":303,"completed":84,"skipped":1497,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client 
Sep 21 10:43:20.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-c3821155-a4bd-4664-91a7-f15e8fca78a0 STEP: Creating a pod to test consume configMaps Sep 21 10:43:20.791: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-83a93830-cd8f-4480-acaf-e7f9f7b8d5f4" in namespace "projected-2397" to be "Succeeded or Failed" Sep 21 10:43:20.802: INFO: Pod "pod-projected-configmaps-83a93830-cd8f-4480-acaf-e7f9f7b8d5f4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.836788ms Sep 21 10:43:22.810: INFO: Pod "pod-projected-configmaps-83a93830-cd8f-4480-acaf-e7f9f7b8d5f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019398574s Sep 21 10:43:24.818: INFO: Pod "pod-projected-configmaps-83a93830-cd8f-4480-acaf-e7f9f7b8d5f4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026972369s STEP: Saw pod success Sep 21 10:43:24.818: INFO: Pod "pod-projected-configmaps-83a93830-cd8f-4480-acaf-e7f9f7b8d5f4" satisfied condition "Succeeded or Failed" Sep 21 10:43:24.822: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-83a93830-cd8f-4480-acaf-e7f9f7b8d5f4 container projected-configmap-volume-test: STEP: delete the pod Sep 21 10:43:24.878: INFO: Waiting for pod pod-projected-configmaps-83a93830-cd8f-4480-acaf-e7f9f7b8d5f4 to disappear Sep 21 10:43:24.918: INFO: Pod pod-projected-configmaps-83a93830-cd8f-4480-acaf-e7f9f7b8d5f4 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:43:24.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2397" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":85,"skipped":1519,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:43:24.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-3534/secret-test-4a5bfeaa-9247-44b5-94c2-df89e8785dee STEP: Creating a pod to test consume secrets Sep 21 10:43:25.063: INFO: Waiting up to 5m0s for pod "pod-configmaps-8644f2e6-27f9-4d72-be9a-57838743656c" in namespace "secrets-3534" to be "Succeeded or Failed" Sep 21 10:43:25.084: INFO: Pod "pod-configmaps-8644f2e6-27f9-4d72-be9a-57838743656c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.881312ms Sep 21 10:43:27.191: INFO: Pod "pod-configmaps-8644f2e6-27f9-4d72-be9a-57838743656c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127084411s Sep 21 10:43:29.199: INFO: Pod "pod-configmaps-8644f2e6-27f9-4d72-be9a-57838743656c": Phase="Running", Reason="", readiness=true. Elapsed: 4.135081402s Sep 21 10:43:31.207: INFO: Pod "pod-configmaps-8644f2e6-27f9-4d72-be9a-57838743656c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.14299119s STEP: Saw pod success Sep 21 10:43:31.207: INFO: Pod "pod-configmaps-8644f2e6-27f9-4d72-be9a-57838743656c" satisfied condition "Succeeded or Failed" Sep 21 10:43:31.213: INFO: Trying to get logs from node kali-worker pod pod-configmaps-8644f2e6-27f9-4d72-be9a-57838743656c container env-test: STEP: delete the pod Sep 21 10:43:31.264: INFO: Waiting for pod pod-configmaps-8644f2e6-27f9-4d72-be9a-57838743656c to disappear Sep 21 10:43:31.273: INFO: Pod pod-configmaps-8644f2e6-27f9-4d72-be9a-57838743656c no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:43:31.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3534" for this suite. 
• [SLOW TEST:6.353 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36 should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":86,"skipped":1529,"failed":0} SS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:43:31.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-lqld7 in namespace proxy-796 I0921 10:43:31.458481 10 runners.go:190] Created replication controller with name: proxy-service-lqld7, namespace: proxy-796, replica count: 1 I0921 10:43:32.510458 10 runners.go:190] proxy-service-lqld7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0921 10:43:33.511395 10 runners.go:190] 
proxy-service-lqld7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0921 10:43:34.512801 10 runners.go:190] proxy-service-lqld7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0921 10:43:35.513747 10 runners.go:190] proxy-service-lqld7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0921 10:43:36.514402 10 runners.go:190] proxy-service-lqld7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0921 10:43:37.515396 10 runners.go:190] proxy-service-lqld7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0921 10:43:38.516574 10 runners.go:190] proxy-service-lqld7 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 21 10:43:38.526: INFO: setup took 7.155996011s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Sep 21 10:43:38.539: INFO: (0) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:162/proxy/: bar (200; 11.208374ms) Sep 21 10:43:38.539: INFO: (0) /api/v1/namespaces/proxy-796/services/http:proxy-service-lqld7:portname1/proxy/: foo (200; 11.981226ms) Sep 21 10:43:38.539: INFO: (0) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:160/proxy/: foo (200; 11.508447ms) Sep 21 10:43:38.539: INFO: (0) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:162/proxy/: bar (200; 11.65629ms) Sep 21 10:43:38.540: INFO: (0) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:160/proxy/: foo (200; 12.23773ms) Sep 21 10:43:38.540: INFO: (0) /api/v1/namespaces/proxy-796/services/http:proxy-service-lqld7:portname2/proxy/: bar (200; 12.983377ms) Sep 21 10:43:38.541: INFO: (0) 
/api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm/proxy/: test (200; 13.111573ms) Sep 21 10:43:38.542: INFO: (0) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:1080/proxy/: testt... (200; 14.568545ms) Sep 21 10:43:38.543: INFO: (0) /api/v1/namespaces/proxy-796/services/proxy-service-lqld7:portname2/proxy/: bar (200; 15.06936ms) Sep 21 10:43:38.544: INFO: (0) /api/v1/namespaces/proxy-796/services/proxy-service-lqld7:portname1/proxy/: foo (200; 16.145272ms) Sep 21 10:43:38.546: INFO: (0) /api/v1/namespaces/proxy-796/services/https:proxy-service-lqld7:tlsportname1/proxy/: tls baz (200; 18.20095ms) Sep 21 10:43:38.546: INFO: (0) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:462/proxy/: tls qux (200; 18.646409ms) Sep 21 10:43:38.546: INFO: (0) /api/v1/namespaces/proxy-796/services/https:proxy-service-lqld7:tlsportname2/proxy/: tls qux (200; 18.357277ms) Sep 21 10:43:38.546: INFO: (0) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:460/proxy/: tls baz (200; 18.551221ms) Sep 21 10:43:38.546: INFO: (0) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:443/proxy/: testt... 
(200; 7.36007ms) Sep 21 10:43:38.555: INFO: (1) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:462/proxy/: tls qux (200; 7.538818ms) Sep 21 10:43:38.555: INFO: (1) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:162/proxy/: bar (200; 7.503413ms) Sep 21 10:43:38.555: INFO: (1) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:162/proxy/: bar (200; 7.429714ms) Sep 21 10:43:38.555: INFO: (1) /api/v1/namespaces/proxy-796/services/https:proxy-service-lqld7:tlsportname2/proxy/: tls qux (200; 8.519073ms) Sep 21 10:43:38.555: INFO: (1) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:160/proxy/: foo (200; 7.704562ms) Sep 21 10:43:38.555: INFO: (1) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm/proxy/: test (200; 8.19553ms) Sep 21 10:43:38.555: INFO: (1) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:443/proxy/: test (200; 6.038278ms) Sep 21 10:43:38.563: INFO: (2) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:160/proxy/: foo (200; 6.401865ms) Sep 21 10:43:38.563: INFO: (2) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:1080/proxy/: t... (200; 6.55925ms) Sep 21 10:43:38.563: INFO: (2) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:162/proxy/: bar (200; 6.562318ms) Sep 21 10:43:38.563: INFO: (2) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:462/proxy/: tls qux (200; 6.480585ms) Sep 21 10:43:38.563: INFO: (2) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:160/proxy/: foo (200; 6.904101ms) Sep 21 10:43:38.563: INFO: (2) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:460/proxy/: tls baz (200; 7.119787ms) Sep 21 10:43:38.564: INFO: (2) /api/v1/namespaces/proxy-796/services/http:proxy-service-lqld7:portname2/proxy/: bar (200; 7.478252ms) Sep 21 10:43:38.564: INFO: (2) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:1080/proxy/: testtestt... 
(200; 4.78313ms) Sep 21 10:43:38.574: INFO: (3) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:160/proxy/: foo (200; 5.774732ms) Sep 21 10:43:38.574: INFO: (3) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:460/proxy/: tls baz (200; 5.762602ms) Sep 21 10:43:38.574: INFO: (3) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm/proxy/: test (200; 6.288222ms) Sep 21 10:43:38.575: INFO: (3) /api/v1/namespaces/proxy-796/services/http:proxy-service-lqld7:portname1/proxy/: foo (200; 7.621578ms) Sep 21 10:43:38.575: INFO: (3) /api/v1/namespaces/proxy-796/services/http:proxy-service-lqld7:portname2/proxy/: bar (200; 7.367989ms) Sep 21 10:43:38.581: INFO: (4) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:160/proxy/: foo (200; 5.298251ms) Sep 21 10:43:38.581: INFO: (4) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:462/proxy/: tls qux (200; 5.303784ms) Sep 21 10:43:38.581: INFO: (4) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:160/proxy/: foo (200; 5.604185ms) Sep 21 10:43:38.581: INFO: (4) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm/proxy/: test (200; 5.433748ms) Sep 21 10:43:38.582: INFO: (4) /api/v1/namespaces/proxy-796/services/http:proxy-service-lqld7:portname1/proxy/: foo (200; 6.057252ms) Sep 21 10:43:38.582: INFO: (4) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:162/proxy/: bar (200; 5.878137ms) Sep 21 10:43:38.582: INFO: (4) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:443/proxy/: testt... (200; 7.670812ms) Sep 21 10:43:38.588: INFO: (5) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:462/proxy/: tls qux (200; 4.365599ms) Sep 21 10:43:38.589: INFO: (5) /api/v1/namespaces/proxy-796/services/https:proxy-service-lqld7:tlsportname2/proxy/: tls qux (200; 5.41047ms) Sep 21 10:43:38.590: INFO: (5) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:1080/proxy/: t... 
(200; 5.626547ms) Sep 21 10:43:38.590: INFO: (5) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:160/proxy/: foo (200; 6.337736ms) Sep 21 10:43:38.591: INFO: (5) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:1080/proxy/: testtest (200; 7.06361ms) Sep 21 10:43:38.592: INFO: (5) /api/v1/namespaces/proxy-796/services/http:proxy-service-lqld7:portname2/proxy/: bar (200; 7.201327ms) Sep 21 10:43:38.592: INFO: (5) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:162/proxy/: bar (200; 7.527826ms) Sep 21 10:43:38.592: INFO: (5) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:460/proxy/: tls baz (200; 7.735293ms) Sep 21 10:43:38.592: INFO: (5) /api/v1/namespaces/proxy-796/services/proxy-service-lqld7:portname2/proxy/: bar (200; 7.934893ms) Sep 21 10:43:38.592: INFO: (5) /api/v1/namespaces/proxy-796/services/https:proxy-service-lqld7:tlsportname1/proxy/: tls baz (200; 8.012041ms) Sep 21 10:43:38.592: INFO: (5) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:162/proxy/: bar (200; 8.414522ms) Sep 21 10:43:38.593: INFO: (5) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:160/proxy/: foo (200; 8.256248ms) Sep 21 10:43:38.593: INFO: (5) /api/v1/namespaces/proxy-796/services/proxy-service-lqld7:portname1/proxy/: foo (200; 9.143947ms) Sep 21 10:43:38.593: INFO: (5) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:443/proxy/: testtest (200; 4.669714ms) Sep 21 10:43:38.599: INFO: (6) /api/v1/namespaces/proxy-796/services/http:proxy-service-lqld7:portname2/proxy/: bar (200; 5.456654ms) Sep 21 10:43:38.599: INFO: (6) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:162/proxy/: bar (200; 5.490721ms) Sep 21 10:43:38.600: INFO: (6) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:460/proxy/: tls baz (200; 6.07766ms) Sep 21 10:43:38.600: INFO: (6) /api/v1/namespaces/proxy-796/services/https:proxy-service-lqld7:tlsportname2/proxy/: tls qux (200; 6.134335ms) 
Sep 21 10:43:38.601: INFO: (6) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:160/proxy/: foo (200; 6.588649ms) Sep 21 10:43:38.601: INFO: (6) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:160/proxy/: foo (200; 6.621127ms) Sep 21 10:43:38.601: INFO: (6) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:162/proxy/: bar (200; 6.977617ms) Sep 21 10:43:38.601: INFO: (6) /api/v1/namespaces/proxy-796/services/proxy-service-lqld7:portname1/proxy/: foo (200; 7.274533ms) Sep 21 10:43:38.602: INFO: (6) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:1080/proxy/: t... (200; 7.168525ms) Sep 21 10:43:38.602: INFO: (6) /api/v1/namespaces/proxy-796/services/https:proxy-service-lqld7:tlsportname1/proxy/: tls baz (200; 7.970618ms) Sep 21 10:43:38.602: INFO: (6) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:462/proxy/: tls qux (200; 7.91626ms) Sep 21 10:43:38.602: INFO: (6) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:443/proxy/: test (200; 4.298909ms) Sep 21 10:43:38.608: INFO: (7) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:462/proxy/: tls qux (200; 4.510667ms) Sep 21 10:43:38.608: INFO: (7) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:162/proxy/: bar (200; 5.04379ms) Sep 21 10:43:38.608: INFO: (7) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:443/proxy/: t... 
(200; 6.296579ms) Sep 21 10:43:38.610: INFO: (7) /api/v1/namespaces/proxy-796/services/http:proxy-service-lqld7:portname1/proxy/: foo (200; 5.768205ms) Sep 21 10:43:38.610: INFO: (7) /api/v1/namespaces/proxy-796/services/https:proxy-service-lqld7:tlsportname2/proxy/: tls qux (200; 6.529971ms) Sep 21 10:43:38.610: INFO: (7) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:162/proxy/: bar (200; 6.683662ms) Sep 21 10:43:38.610: INFO: (7) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:160/proxy/: foo (200; 6.67735ms) Sep 21 10:43:38.611: INFO: (7) /api/v1/namespaces/proxy-796/services/https:proxy-service-lqld7:tlsportname1/proxy/: tls baz (200; 7.222666ms) Sep 21 10:43:38.611: INFO: (7) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:1080/proxy/: testtest (200; 6.523837ms) Sep 21 10:43:38.619: INFO: (8) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:1080/proxy/: testt... (200; 7.234193ms) Sep 21 10:43:38.620: INFO: (8) /api/v1/namespaces/proxy-796/services/proxy-service-lqld7:portname2/proxy/: bar (200; 7.634949ms) Sep 21 10:43:38.620: INFO: (8) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:443/proxy/: t... 
(200; 7.30972ms) Sep 21 10:43:38.628: INFO: (9) /api/v1/namespaces/proxy-796/services/proxy-service-lqld7:portname2/proxy/: bar (200; 7.627318ms) Sep 21 10:43:38.628: INFO: (9) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:1080/proxy/: testtest (200; 8.245336ms) Sep 21 10:43:38.629: INFO: (9) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:460/proxy/: tls baz (200; 8.486438ms) Sep 21 10:43:38.631: INFO: (9) /api/v1/namespaces/proxy-796/services/https:proxy-service-lqld7:tlsportname2/proxy/: tls qux (200; 9.89071ms) Sep 21 10:43:38.631: INFO: (9) /api/v1/namespaces/proxy-796/services/https:proxy-service-lqld7:tlsportname1/proxy/: tls baz (200; 9.986148ms) Sep 21 10:43:38.635: INFO: (10) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:462/proxy/: tls qux (200; 4.106193ms) Sep 21 10:43:38.635: INFO: (10) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm/proxy/: test (200; 4.274799ms) Sep 21 10:43:38.636: INFO: (10) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:160/proxy/: foo (200; 4.71999ms) Sep 21 10:43:38.636: INFO: (10) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:162/proxy/: bar (200; 4.980467ms) Sep 21 10:43:38.637: INFO: (10) /api/v1/namespaces/proxy-796/services/proxy-service-lqld7:portname1/proxy/: foo (200; 5.277991ms) Sep 21 10:43:38.637: INFO: (10) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:443/proxy/: testt... 
(200; 6.478277ms) Sep 21 10:43:38.638: INFO: (10) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:460/proxy/: tls baz (200; 6.509556ms) Sep 21 10:43:38.638: INFO: (10) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:160/proxy/: foo (200; 7.010702ms) Sep 21 10:43:38.638: INFO: (10) /api/v1/namespaces/proxy-796/services/http:proxy-service-lqld7:portname2/proxy/: bar (200; 6.787129ms) Sep 21 10:43:38.639: INFO: (10) /api/v1/namespaces/proxy-796/services/http:proxy-service-lqld7:portname1/proxy/: foo (200; 7.183474ms) Sep 21 10:43:38.639: INFO: (10) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:162/proxy/: bar (200; 7.571705ms) Sep 21 10:43:38.639: INFO: (10) /api/v1/namespaces/proxy-796/services/https:proxy-service-lqld7:tlsportname1/proxy/: tls baz (200; 7.984384ms) Sep 21 10:43:38.640: INFO: (10) /api/v1/namespaces/proxy-796/services/proxy-service-lqld7:portname2/proxy/: bar (200; 8.453218ms) Sep 21 10:43:38.643: INFO: (11) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:160/proxy/: foo (200; 3.301033ms) Sep 21 10:43:38.644: INFO: (11) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:162/proxy/: bar (200; 4.334194ms) Sep 21 10:43:38.645: INFO: (11) /api/v1/namespaces/proxy-796/services/http:proxy-service-lqld7:portname2/proxy/: bar (200; 5.166074ms) Sep 21 10:43:38.645: INFO: (11) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:1080/proxy/: testtest (200; 16.830279ms) Sep 21 10:43:38.658: INFO: (11) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:160/proxy/: foo (200; 17.947858ms) Sep 21 10:43:38.658: INFO: (11) /api/v1/namespaces/proxy-796/services/proxy-service-lqld7:portname1/proxy/: foo (200; 17.935797ms) Sep 21 10:43:38.658: INFO: (11) /api/v1/namespaces/proxy-796/services/https:proxy-service-lqld7:tlsportname2/proxy/: tls qux (200; 18.032267ms) Sep 21 10:43:38.659: INFO: (11) /api/v1/namespaces/proxy-796/services/https:proxy-service-lqld7:tlsportname1/proxy/: tls 
baz (200; 18.437596ms) Sep 21 10:43:38.659: INFO: (11) /api/v1/namespaces/proxy-796/services/proxy-service-lqld7:portname2/proxy/: bar (200; 18.829678ms) Sep 21 10:43:38.659: INFO: (11) /api/v1/namespaces/proxy-796/services/http:proxy-service-lqld7:portname1/proxy/: foo (200; 19.051051ms) Sep 21 10:43:38.659: INFO: (11) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:1080/proxy/: t... (200; 18.890138ms) Sep 21 10:43:38.659: INFO: (11) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:162/proxy/: bar (200; 19.181722ms) Sep 21 10:43:38.660: INFO: (11) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:443/proxy/: t... (200; 8.723719ms) Sep 21 10:43:38.669: INFO: (12) /api/v1/namespaces/proxy-796/services/proxy-service-lqld7:portname1/proxy/: foo (200; 9.047122ms) Sep 21 10:43:38.669: INFO: (12) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:443/proxy/: test (200; 9.205343ms) Sep 21 10:43:38.670: INFO: (12) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:1080/proxy/: testtesttest (200; 9.409983ms) Sep 21 10:43:38.681: INFO: (13) /api/v1/namespaces/proxy-796/services/proxy-service-lqld7:portname2/proxy/: bar (200; 9.917675ms) Sep 21 10:43:38.681: INFO: (13) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:160/proxy/: foo (200; 9.74551ms) Sep 21 10:43:38.682: INFO: (13) /api/v1/namespaces/proxy-796/services/https:proxy-service-lqld7:tlsportname1/proxy/: tls baz (200; 9.962097ms) Sep 21 10:43:38.682: INFO: (13) /api/v1/namespaces/proxy-796/services/proxy-service-lqld7:portname1/proxy/: foo (200; 10.158075ms) Sep 21 10:43:38.682: INFO: (13) /api/v1/namespaces/proxy-796/services/http:proxy-service-lqld7:portname2/proxy/: bar (200; 10.634404ms) Sep 21 10:43:38.682: INFO: (13) /api/v1/namespaces/proxy-796/services/https:proxy-service-lqld7:tlsportname2/proxy/: tls qux (200; 10.527805ms) Sep 21 10:43:38.682: INFO: (13) 
/api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:1080/proxy/: t... (200; 10.516726ms) Sep 21 10:43:38.682: INFO: (13) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:162/proxy/: bar (200; 10.527852ms) Sep 21 10:43:38.683: INFO: (13) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:162/proxy/: bar (200; 10.793756ms) Sep 21 10:43:38.689: INFO: (14) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:160/proxy/: foo (200; 5.756772ms) Sep 21 10:43:38.691: INFO: (14) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:162/proxy/: bar (200; 6.885252ms) Sep 21 10:43:38.691: INFO: (14) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm/proxy/: test (200; 6.512992ms) Sep 21 10:43:38.691: INFO: (14) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:160/proxy/: foo (200; 6.821841ms) Sep 21 10:43:38.691: INFO: (14) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:443/proxy/: t... (200; 7.229383ms) Sep 21 10:43:38.692: INFO: (14) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:460/proxy/: tls baz (200; 7.297976ms) Sep 21 10:43:38.692: INFO: (14) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:462/proxy/: tls qux (200; 7.837393ms) Sep 21 10:43:38.692: INFO: (14) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:1080/proxy/: testt... 
(200; 4.356354ms) Sep 21 10:43:38.698: INFO: (15) /api/v1/namespaces/proxy-796/services/http:proxy-service-lqld7:portname2/proxy/: bar (200; 4.949061ms) Sep 21 10:43:38.698: INFO: (15) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:162/proxy/: bar (200; 5.087534ms) Sep 21 10:43:38.698: INFO: (15) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm/proxy/: test (200; 5.230616ms) Sep 21 10:43:38.699: INFO: (15) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:462/proxy/: tls qux (200; 5.729353ms) Sep 21 10:43:38.699: INFO: (15) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:443/proxy/: testt... (200; 6.016565ms) Sep 21 10:43:38.708: INFO: (16) /api/v1/namespaces/proxy-796/services/proxy-service-lqld7:portname1/proxy/: foo (200; 7.163772ms) Sep 21 10:43:38.708: INFO: (16) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:162/proxy/: bar (200; 6.961777ms) Sep 21 10:43:38.708: INFO: (16) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:460/proxy/: tls baz (200; 7.040607ms) Sep 21 10:43:38.709: INFO: (16) /api/v1/namespaces/proxy-796/services/http:proxy-service-lqld7:portname1/proxy/: foo (200; 7.510665ms) Sep 21 10:43:38.709: INFO: (16) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:1080/proxy/: testtest (200; 7.93608ms) Sep 21 10:43:38.709: INFO: (16) /api/v1/namespaces/proxy-796/services/https:proxy-service-lqld7:tlsportname2/proxy/: tls qux (200; 8.358653ms) Sep 21 10:43:38.709: INFO: (16) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:443/proxy/: test (200; 8.088592ms) Sep 21 10:43:38.718: INFO: (17) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:462/proxy/: tls qux (200; 8.185075ms) Sep 21 10:43:38.718: INFO: (17) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:162/proxy/: bar (200; 8.093026ms) Sep 21 10:43:38.718: INFO: (17) /api/v1/namespaces/proxy-796/pods/http:proxy-service-lqld7-2brmm:1080/proxy/: t... 
(200; 8.133603ms) Sep 21 10:43:38.718: INFO: (17) /api/v1/namespaces/proxy-796/services/proxy-service-lqld7:portname1/proxy/: foo (200; 8.594076ms) Sep 21 10:43:38.718: INFO: (17) /api/v1/namespaces/proxy-796/services/proxy-service-lqld7:portname2/proxy/: bar (200; 8.769787ms) Sep 21 10:43:38.718: INFO: (17) /api/v1/namespaces/proxy-796/services/http:proxy-service-lqld7:portname1/proxy/: foo (200; 8.478515ms) Sep 21 10:43:38.718: INFO: (17) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:1080/proxy/: testtest (200; 3.800501ms) Sep 21 10:43:38.723: INFO: (18) /api/v1/namespaces/proxy-796/services/http:proxy-service-lqld7:portname2/proxy/: bar (200; 4.147134ms) Sep 21 10:43:38.724: INFO: (18) /api/v1/namespaces/proxy-796/services/https:proxy-service-lqld7:tlsportname1/proxy/: tls baz (200; 4.977408ms) Sep 21 10:43:38.724: INFO: (18) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:443/proxy/: t... (200; 5.165349ms) Sep 21 10:43:38.724: INFO: (18) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:162/proxy/: bar (200; 5.177399ms) Sep 21 10:43:38.724: INFO: (18) /api/v1/namespaces/proxy-796/services/proxy-service-lqld7:portname2/proxy/: bar (200; 5.40407ms) Sep 21 10:43:38.724: INFO: (18) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:460/proxy/: tls baz (200; 5.81239ms) Sep 21 10:43:38.725: INFO: (18) /api/v1/namespaces/proxy-796/services/https:proxy-service-lqld7:tlsportname2/proxy/: tls qux (200; 5.66338ms) Sep 21 10:43:38.725: INFO: (18) /api/v1/namespaces/proxy-796/services/http:proxy-service-lqld7:portname1/proxy/: foo (200; 6.060581ms) Sep 21 10:43:38.725: INFO: (18) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:462/proxy/: tls qux (200; 6.002819ms) Sep 21 10:43:38.725: INFO: (18) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:1080/proxy/: testt... 
(200; 2.989124ms) Sep 21 10:43:38.730: INFO: (19) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm/proxy/: test (200; 3.53164ms) Sep 21 10:43:38.730: INFO: (19) /api/v1/namespaces/proxy-796/services/https:proxy-service-lqld7:tlsportname1/proxy/: tls baz (200; 4.226605ms) Sep 21 10:43:38.731: INFO: (19) /api/v1/namespaces/proxy-796/services/https:proxy-service-lqld7:tlsportname2/proxy/: tls qux (200; 4.664918ms) Sep 21 10:43:38.732: INFO: (19) /api/v1/namespaces/proxy-796/pods/https:proxy-service-lqld7-2brmm:460/proxy/: tls baz (200; 5.34471ms) Sep 21 10:43:38.732: INFO: (19) /api/v1/namespaces/proxy-796/pods/proxy-service-lqld7-2brmm:1080/proxy/: test>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-c3c31378-a701-4d98-8d65-b70ef93593de STEP: Creating a pod to test consume secrets Sep 21 10:43:53.422: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c9f3bbcf-8e39-4b1f-8662-9aeb0b2e7a8e" in namespace "projected-860" to be "Succeeded or Failed" Sep 21 10:43:53.447: INFO: Pod "pod-projected-secrets-c9f3bbcf-8e39-4b1f-8662-9aeb0b2e7a8e": Phase="Pending", Reason="", readiness=false. Elapsed: 25.280297ms Sep 21 10:43:55.524: INFO: Pod "pod-projected-secrets-c9f3bbcf-8e39-4b1f-8662-9aeb0b2e7a8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101792936s Sep 21 10:43:57.531: INFO: Pod "pod-projected-secrets-c9f3bbcf-8e39-4b1f-8662-9aeb0b2e7a8e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.108596316s STEP: Saw pod success Sep 21 10:43:57.531: INFO: Pod "pod-projected-secrets-c9f3bbcf-8e39-4b1f-8662-9aeb0b2e7a8e" satisfied condition "Succeeded or Failed" Sep 21 10:43:57.535: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-c9f3bbcf-8e39-4b1f-8662-9aeb0b2e7a8e container projected-secret-volume-test: STEP: delete the pod Sep 21 10:43:57.583: INFO: Waiting for pod pod-projected-secrets-c9f3bbcf-8e39-4b1f-8662-9aeb0b2e7a8e to disappear Sep 21 10:43:57.599: INFO: Pod pod-projected-secrets-c9f3bbcf-8e39-4b1f-8662-9aeb0b2e7a8e no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:43:57.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-860" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":88,"skipped":1533,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:43:57.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:43:57.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3696" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":89,"skipped":1582,"failed":0} ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:43:57.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 21 10:43:57.915: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bf6e4e64-05bd-4470-9b80-aa0b1deebe40" in namespace "downward-api-6989" to be "Succeeded or Failed" Sep 21 10:43:57.924: INFO: Pod "downwardapi-volume-bf6e4e64-05bd-4470-9b80-aa0b1deebe40": Phase="Pending", Reason="", readiness=false. Elapsed: 9.086553ms Sep 21 10:43:59.967: INFO: Pod "downwardapi-volume-bf6e4e64-05bd-4470-9b80-aa0b1deebe40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051351172s Sep 21 10:44:01.992: INFO: Pod "downwardapi-volume-bf6e4e64-05bd-4470-9b80-aa0b1deebe40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076619738s STEP: Saw pod success Sep 21 10:44:01.992: INFO: Pod "downwardapi-volume-bf6e4e64-05bd-4470-9b80-aa0b1deebe40" satisfied condition "Succeeded or Failed" Sep 21 10:44:01.997: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-bf6e4e64-05bd-4470-9b80-aa0b1deebe40 container client-container: STEP: delete the pod Sep 21 10:44:02.076: INFO: Waiting for pod downwardapi-volume-bf6e4e64-05bd-4470-9b80-aa0b1deebe40 to disappear Sep 21 10:44:02.145: INFO: Pod downwardapi-volume-bf6e4e64-05bd-4470-9b80-aa0b1deebe40 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:44:02.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6989" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":90,"skipped":1582,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:44:02.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-81c84a76-875d-4791-bf3b-d1ff6a5ff544 STEP: Creating a pod to test consume secrets Sep 21 10:44:02.421: INFO: Waiting up to 5m0s for pod "pod-secrets-06d6c97e-86c7-4d5a-a1b4-d94758d0c492" in namespace "secrets-8194" to be "Succeeded or Failed" Sep 21 10:44:02.458: INFO: Pod "pod-secrets-06d6c97e-86c7-4d5a-a1b4-d94758d0c492": Phase="Pending", Reason="", readiness=false. Elapsed: 37.395476ms Sep 21 10:44:04.467: INFO: Pod "pod-secrets-06d6c97e-86c7-4d5a-a1b4-d94758d0c492": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04602742s Sep 21 10:44:06.473: INFO: Pod "pod-secrets-06d6c97e-86c7-4d5a-a1b4-d94758d0c492": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.052363116s STEP: Saw pod success Sep 21 10:44:06.474: INFO: Pod "pod-secrets-06d6c97e-86c7-4d5a-a1b4-d94758d0c492" satisfied condition "Succeeded or Failed" Sep 21 10:44:06.478: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-06d6c97e-86c7-4d5a-a1b4-d94758d0c492 container secret-volume-test: STEP: delete the pod Sep 21 10:44:06.592: INFO: Waiting for pod pod-secrets-06d6c97e-86c7-4d5a-a1b4-d94758d0c492 to disappear Sep 21 10:44:06.599: INFO: Pod pod-secrets-06d6c97e-86c7-4d5a-a1b4-d94758d0c492 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:44:06.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8194" for this suite. STEP: Destroying namespace "secret-namespace-2978" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":303,"completed":91,"skipped":1592,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should delete a collection of events [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:44:06.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events Sep 21 10:44:06.752: INFO: created test-event-1 Sep 21 10:44:06.767: INFO: created test-event-2 Sep 21 10:44:06.772: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Sep 21 10:44:06.779: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Sep 21 10:44:06.819: INFO: requesting list of events to confirm quantity [AfterEach] [sig-api-machinery] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:44:06.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-594" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":303,"completed":92,"skipped":1606,"failed":0} SSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:44:06.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:44:07.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-325" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":303,"completed":93,"skipped":1612,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:44:07.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-ca18ba68-0560-444b-85ad-b57630f57981 STEP: Creating a pod to test consume secrets Sep 21 10:44:07.382: INFO: Waiting up to 5m0s for pod "pod-secrets-148f755f-68c9-48cf-bcf7-ae822db4e21c" in namespace "secrets-7104" to be "Succeeded or Failed" Sep 21 10:44:07.408: INFO: Pod "pod-secrets-148f755f-68c9-48cf-bcf7-ae822db4e21c": Phase="Pending", Reason="", readiness=false. Elapsed: 24.993022ms Sep 21 10:44:09.415: INFO: Pod "pod-secrets-148f755f-68c9-48cf-bcf7-ae822db4e21c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.032558884s Sep 21 10:44:11.424: INFO: Pod "pod-secrets-148f755f-68c9-48cf-bcf7-ae822db4e21c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041090443s STEP: Saw pod success Sep 21 10:44:11.424: INFO: Pod "pod-secrets-148f755f-68c9-48cf-bcf7-ae822db4e21c" satisfied condition "Succeeded or Failed" Sep 21 10:44:11.430: INFO: Trying to get logs from node kali-worker pod pod-secrets-148f755f-68c9-48cf-bcf7-ae822db4e21c container secret-volume-test: STEP: delete the pod Sep 21 10:44:11.464: INFO: Waiting for pod pod-secrets-148f755f-68c9-48cf-bcf7-ae822db4e21c to disappear Sep 21 10:44:11.479: INFO: Pod pod-secrets-148f755f-68c9-48cf-bcf7-ae822db4e21c no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:44:11.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7104" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":94,"skipped":1614,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:44:11.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:44:11.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8139" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":303,"completed":95,"skipped":1621,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:44:11.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 21 10:44:11.819: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:44:18.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7733" for this suite.
• [SLOW TEST:6.600 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Simple CustomResourceDefinition
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
listing custom resource definition objects works [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":303,"completed":96,"skipped":1625,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:44:18.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on tmpfs
Sep 21 10:44:18.377: INFO: Waiting up to 5m0s for pod "pod-32c2b6e6-cf6d-4120-b0c7-75a5abd338b2" in namespace "emptydir-7164" to be "Succeeded or Failed"
Sep 21 10:44:18.396: INFO: Pod "pod-32c2b6e6-cf6d-4120-b0c7-75a5abd338b2": Phase="Pending", Reason="", readiness=false. Elapsed: 18.519014ms
Sep 21 10:44:20.404: INFO: Pod "pod-32c2b6e6-cf6d-4120-b0c7-75a5abd338b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026863748s
Sep 21 10:44:23.559: INFO: Pod "pod-32c2b6e6-cf6d-4120-b0c7-75a5abd338b2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.182418018s
Sep 21 10:44:25.569: INFO: Pod "pod-32c2b6e6-cf6d-4120-b0c7-75a5abd338b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.192308803s
STEP: Saw pod success
Sep 21 10:44:25.570: INFO: Pod "pod-32c2b6e6-cf6d-4120-b0c7-75a5abd338b2" satisfied condition "Succeeded or Failed"
Sep 21 10:44:25.575: INFO: Trying to get logs from node kali-worker2 pod pod-32c2b6e6-cf6d-4120-b0c7-75a5abd338b2 container test-container:
STEP: delete the pod
Sep 21 10:44:25.640: INFO: Waiting for pod pod-32c2b6e6-cf6d-4120-b0c7-75a5abd338b2 to disappear
Sep 21 10:44:25.679: INFO: Pod pod-32c2b6e6-cf6d-4120-b0c7-75a5abd338b2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:44:25.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7164" for this suite.
• [SLOW TEST:7.493 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":97,"skipped":1646,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:44:25.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-2972
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-2972
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2972
Sep 21 10:44:25.990: INFO: Found 0 stateful pods, waiting for 1
Sep 21 10:44:36.001: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Sep 21 10:44:36.011: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Sep 21 10:44:41.509: INFO: stderr: "I0921 10:44:41.354750 1145 log.go:181] (0x271c700) (0x271c770) Create stream\nI0921 10:44:41.359339 1145 log.go:181] (0x271c700) (0x271c770) Stream added, broadcasting: 1\nI0921 10:44:41.375037 1145 log.go:181] (0x271c700) Reply frame received for 1\nI0921 10:44:41.375970 1145 log.go:181] (0x271c700) (0x271c930) Create stream\nI0921 10:44:41.376084 1145 log.go:181] (0x271c700) (0x271c930) Stream added, broadcasting: 3\nI0921 10:44:41.377922 1145 log.go:181] (0x271c700) Reply frame received for 3\nI0921 10:44:41.378243 1145 log.go:181] (0x271c700) (0x271caf0) Create stream\nI0921 10:44:41.378325 1145 log.go:181] (0x271c700) (0x271caf0) Stream added, broadcasting: 5\nI0921 10:44:41.379408 1145 log.go:181] (0x271c700) Reply frame received for 5\nI0921 10:44:41.460701 1145 log.go:181] (0x271c700) Data frame received for 5\nI0921 10:44:41.460894 1145 log.go:181] (0x271caf0) (5) Data frame handling\nI0921 10:44:41.461220 1145 log.go:181] (0x271caf0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0921 10:44:41.488422 1145 log.go:181] (0x271c700) Data frame received for 3\nI0921 10:44:41.488723 1145 log.go:181] (0x271c930) (3) Data frame handling\nI0921 10:44:41.488968 1145 log.go:181] (0x271c930) (3) Data frame sent\nI0921 10:44:41.489160 1145 log.go:181] (0x271c700) Data frame received for 3\nI0921 10:44:41.489357 1145 log.go:181] (0x271c930) (3) Data frame handling\nI0921 10:44:41.489630 1145 log.go:181] (0x271c700) Data frame received for 5\nI0921 10:44:41.489778 1145 log.go:181] (0x271caf0) (5) Data frame handling\nI0921 10:44:41.490786 1145 log.go:181] (0x271c700) Data frame received for 1\nI0921 10:44:41.490867 1145 log.go:181] (0x271c770) (1) Data frame handling\nI0921 10:44:41.490944 1145 log.go:181] (0x271c770) (1) Data frame sent\nI0921 10:44:41.492590 1145 log.go:181] (0x271c700) (0x271c770) Stream removed, broadcasting: 1\nI0921 10:44:41.495620 1145 log.go:181] (0x271c700) Go away received\nI0921 10:44:41.498346 1145 log.go:181] (0x271c700) (0x271c770) Stream removed, broadcasting: 1\nI0921 10:44:41.498818 1145 log.go:181] (0x271c700) (0x271c930) Stream removed, broadcasting: 3\nI0921 10:44:41.499051 1145 log.go:181] (0x271c700) (0x271caf0) Stream removed, broadcasting: 5\n"
Sep 21 10:44:41.510: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Sep 21 10:44:41.511: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Sep 21 10:44:41.518: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Sep 21 10:44:51.528: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Sep 21 10:44:51.528: INFO: Waiting for statefulset status.replicas updated to 0
Sep 21 10:44:51.574: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999934214s
Sep 21 10:44:52.585: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.974502444s
Sep 21 10:44:53.594: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.964509785s
Sep 21 10:44:54.602: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.955242044s
Sep 21 10:44:55.611: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.947227861s
Sep 21 10:44:56.621: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.938080433s
Sep 21 10:44:57.629: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.928448815s
Sep 21 10:44:58.638: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.919675457s
Sep 21 10:44:59.646: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.911392057s
Sep 21 10:45:00.654: INFO: Verifying statefulset ss doesn't scale past 1 for another 903.294927ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2972
Sep 21 10:45:01.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 21 10:45:03.116: INFO: stderr: "I0921 10:45:03.005707 1166 log.go:181] (0x2a98230) (0x2a982a0) Create stream\nI0921 10:45:03.009951 1166 log.go:181] (0x2a98230) (0x2a982a0) Stream added, broadcasting: 1\nI0921 10:45:03.019047 1166 log.go:181] (0x2a98230) Reply frame received for 1\nI0921 10:45:03.019567 1166 log.go:181] (0x2a98230) (0x2a98690) Create stream\nI0921 10:45:03.019642 1166 log.go:181] (0x2a98230) (0x2a98690) Stream added, broadcasting: 3\nI0921 10:45:03.020861 1166 log.go:181] (0x2a98230) Reply frame received for 3\nI0921 10:45:03.021065 1166 log.go:181] (0x2a98230) (0x3106070) Create stream\nI0921 10:45:03.021124 1166 log.go:181] (0x2a98230) (0x3106070) Stream added, broadcasting: 5\nI0921 10:45:03.022047 1166 log.go:181] (0x2a98230) Reply frame received for 5\nI0921 10:45:03.100402 1166 log.go:181] (0x2a98230) Data frame received for 3\nI0921 10:45:03.100687 1166 log.go:181] (0x2a98690) (3) Data frame handling\nI0921 10:45:03.100969 1166 log.go:181] (0x2a98230) Data frame received for 5\nI0921 10:45:03.101662 1166 log.go:181] (0x3106070) (5) Data frame handling\nI0921 10:45:03.102063 1166 log.go:181] (0x3106070) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0921 10:45:03.102290 1166 log.go:181] (0x2a98230) Data frame received for 1\nI0921 10:45:03.102466 1166 log.go:181] (0x2a982a0) (1) Data frame handling\nI0921 10:45:03.102607 1166 log.go:181] (0x2a98690) (3) Data frame sent\nI0921 10:45:03.102720 1166 log.go:181] (0x2a98230) Data frame received for 5\nI0921 10:45:03.102862 1166 log.go:181] (0x3106070) (5) Data frame handling\nI0921 10:45:03.103074 1166 log.go:181] (0x2a982a0) (1) Data frame sent\nI0921 10:45:03.103317 1166 log.go:181] (0x2a98230) Data frame received for 3\nI0921 10:45:03.103437 1166 log.go:181] (0x2a98690) (3) Data frame handling\nI0921 10:45:03.105048 1166 log.go:181] (0x2a98230) (0x2a982a0) Stream removed, broadcasting: 1\nI0921 10:45:03.106274 1166 log.go:181] (0x2a98230) Go away received\nI0921 10:45:03.108413 1166 log.go:181] (0x2a98230) (0x2a982a0) Stream removed, broadcasting: 1\nI0921 10:45:03.108666 1166 log.go:181] (0x2a98230) (0x2a98690) Stream removed, broadcasting: 3\nI0921 10:45:03.108893 1166 log.go:181] (0x2a98230) (0x3106070) Stream removed, broadcasting: 5\n"
Sep 21 10:45:03.117: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Sep 21 10:45:03.117: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Sep 21 10:45:03.124: INFO: Found 1 stateful pods, waiting for 3
Sep 21 10:45:13.197: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Sep 21 10:45:13.197: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Sep 21 10:45:13.197: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false
Sep 21 10:45:23.136: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Sep 21 10:45:23.136: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Sep 21 10:45:23.136: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Sep 21 10:45:23.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Sep 21 10:45:24.676: INFO: stderr: "I0921 10:45:24.551213 1186 log.go:181] (0x301a000) (0x301a070) Create stream\nI0921 10:45:24.553999 1186 log.go:181] (0x301a000) (0x301a070) Stream added, broadcasting: 1\nI0921 10:45:24.566808 1186 log.go:181] (0x301a000) Reply frame received for 1\nI0921 10:45:24.567430 1186 log.go:181] (0x301a000) (0x301a230) Create stream\nI0921 10:45:24.567504 1186 log.go:181] (0x301a000) (0x301a230) Stream added, broadcasting: 3\nI0921 10:45:24.569251 1186 log.go:181] (0x301a000) Reply frame received for 3\nI0921 10:45:24.569747 1186 log.go:181] (0x301a000) (0x301a3f0) Create stream\nI0921 10:45:24.569870 1186 log.go:181] (0x301a000) (0x301a3f0) Stream added, broadcasting: 5\nI0921 10:45:24.571528 1186 log.go:181] (0x301a000) Reply frame received for 5\nI0921 10:45:24.661547 1186 log.go:181] (0x301a000) Data frame received for 1\nI0921 10:45:24.661899 1186 log.go:181] (0x301a000) Data frame received for 5\nI0921 10:45:24.662179 1186 log.go:181] (0x301a3f0) (5) Data frame handling\nI0921 10:45:24.662293 1186 log.go:181] (0x301a000) Data frame received for 3\nI0921 10:45:24.662382 1186 log.go:181] (0x301a230) (3) Data frame handling\nI0921 10:45:24.662557 1186 log.go:181] (0x301a070) (1) Data frame handling\nI0921 10:45:24.663296 1186 log.go:181] (0x301a070) (1) Data frame sent\nI0921 10:45:24.663401 1186 log.go:181] (0x301a3f0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0921 10:45:24.663815 1186 log.go:181] (0x301a230) (3) Data frame sent\nI0921 10:45:24.664128 1186 log.go:181] (0x301a000) Data frame received for 3\nI0921 10:45:24.664451 1186 log.go:181] (0x301a000) Data frame received for 5\nI0921 10:45:24.664707 1186 log.go:181] (0x301a000) (0x301a070) Stream removed, broadcasting: 1\nI0921 10:45:24.665705 1186 log.go:181] (0x301a230) (3) Data frame handling\nI0921 10:45:24.665927 1186 log.go:181] (0x301a3f0) (5) Data frame handling\nI0921 10:45:24.667426 1186 log.go:181] (0x301a000) Go away received\nI0921 10:45:24.669292 1186 log.go:181] (0x301a000) (0x301a070) Stream removed, broadcasting: 1\nI0921 10:45:24.669601 1186 log.go:181] (0x301a000) (0x301a230) Stream removed, broadcasting: 3\nI0921 10:45:24.669781 1186 log.go:181] (0x301a000) (0x301a3f0) Stream removed, broadcasting: 5\n"
Sep 21 10:45:24.678: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Sep 21 10:45:24.678: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Sep 21 10:45:24.678: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Sep 21 10:45:26.219: INFO: stderr: "I0921 10:45:26.052683 1206 log.go:181] (0x2a3e000) (0x2a3e070) Create stream\nI0921 10:45:26.056056 1206 log.go:181] (0x2a3e000) (0x2a3e070) Stream added, broadcasting: 1\nI0921 10:45:26.066937 1206 log.go:181] (0x2a3e000) Reply frame received for 1\nI0921 10:45:26.067376 1206 log.go:181] (0x2a3e000) (0x25c84d0) Create stream\nI0921 10:45:26.067433 1206 log.go:181] (0x2a3e000) (0x25c84d0) Stream added, broadcasting: 3\nI0921 10:45:26.068885 1206 log.go:181] (0x2a3e000) Reply frame received for 3\nI0921 10:45:26.069176 1206 log.go:181] (0x2a3e000) (0x294a460) Create stream\nI0921 10:45:26.069236 1206 log.go:181] (0x2a3e000) (0x294a460) Stream added, broadcasting: 5\nI0921 10:45:26.070833 1206 log.go:181] (0x2a3e000) Reply frame received for 5\nI0921 10:45:26.160630 1206 log.go:181] (0x2a3e000) Data frame received for 5\nI0921 10:45:26.160930 1206 log.go:181] (0x294a460) (5) Data frame handling\nI0921 10:45:26.161518 1206 log.go:181] (0x294a460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0921 10:45:26.202934 1206 log.go:181] (0x2a3e000) Data frame received for 3\nI0921 10:45:26.203145 1206 log.go:181] (0x25c84d0) (3) Data frame handling\nI0921 10:45:26.203399 1206 log.go:181] (0x25c84d0) (3) Data frame sent\nI0921 10:45:26.203584 1206 log.go:181] (0x2a3e000) Data frame received for 3\nI0921 10:45:26.203751 1206 log.go:181] (0x25c84d0) (3) Data frame handling\nI0921 10:45:26.203996 1206 log.go:181] (0x2a3e000) Data frame received for 5\nI0921 10:45:26.204304 1206 log.go:181] (0x294a460) (5) Data frame handling\nI0921 10:45:26.205432 1206 log.go:181] (0x2a3e000) Data frame received for 1\nI0921 10:45:26.205614 1206 log.go:181] (0x2a3e070) (1) Data frame handling\nI0921 10:45:26.205785 1206 log.go:181] (0x2a3e070) (1) Data frame sent\nI0921 10:45:26.208086 1206 log.go:181] (0x2a3e000) (0x2a3e070) Stream removed, broadcasting: 1\nI0921 10:45:26.208910 1206 log.go:181] (0x2a3e000) Go away received\nI0921 10:45:26.211460 1206 log.go:181] (0x2a3e000) (0x2a3e070) Stream removed, broadcasting: 1\nI0921 10:45:26.211690 1206 log.go:181] (0x2a3e000) (0x25c84d0) Stream removed, broadcasting: 3\nI0921 10:45:26.211855 1206 log.go:181] (0x2a3e000) (0x294a460) Stream removed, broadcasting: 5\n"
Sep 21 10:45:26.221: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Sep 21 10:45:26.221: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Sep 21 10:45:26.221: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Sep 21 10:45:27.901: INFO: stderr: "I0921 10:45:27.733812 1226 log.go:181] (0x28d1c70) (0x28d1ce0) Create stream\nI0921 10:45:27.737484 1226 log.go:181] (0x28d1c70) (0x28d1ce0) Stream added, broadcasting: 1\nI0921 10:45:27.760626 1226 log.go:181] (0x28d1c70) Reply frame received for 1\nI0921 10:45:27.761061 1226 log.go:181] (0x28d1c70) (0x26dca10) Create stream\nI0921 10:45:27.761128 1226 log.go:181] (0x28d1c70) (0x26dca10) Stream added, broadcasting: 3\nI0921 10:45:27.763068 1226 log.go:181] (0x28d1c70) Reply frame received for 3\nI0921 10:45:27.763573 1226 log.go:181] (0x28d1c70) (0x26880e0) Create stream\nI0921 10:45:27.763699 1226 log.go:181] (0x28d1c70) (0x26880e0) Stream added, broadcasting: 5\nI0921 10:45:27.765343 1226 log.go:181] (0x28d1c70) Reply frame received for 5\nI0921 10:45:27.851827 1226 log.go:181] (0x28d1c70) Data frame received for 5\nI0921 10:45:27.852024 1226 log.go:181] (0x26880e0) (5) Data frame handling\nI0921 10:45:27.852444 1226 log.go:181] (0x26880e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0921 10:45:27.882924 1226 log.go:181] (0x28d1c70) Data frame received for 3\nI0921 10:45:27.883096 1226 log.go:181] (0x28d1c70) Data frame received for 5\nI0921 10:45:27.883272 1226 log.go:181] (0x26880e0) (5) Data frame handling\nI0921 10:45:27.883392 1226 log.go:181] (0x26dca10) (3) Data frame handling\nI0921 10:45:27.883563 1226 log.go:181] (0x26dca10) (3) Data frame sent\nI0921 10:45:27.883682 1226 log.go:181] (0x28d1c70) Data frame received for 3\nI0921 10:45:27.883795 1226 log.go:181] (0x26dca10) (3) Data frame handling\nI0921 10:45:27.884872 1226 log.go:181] (0x28d1c70) Data frame received for 1\nI0921 10:45:27.885050 1226 log.go:181] (0x28d1ce0) (1) Data frame handling\nI0921 10:45:27.885233 1226 log.go:181] (0x28d1ce0) (1) Data frame sent\nI0921 10:45:27.887511 1226 log.go:181] (0x28d1c70) (0x28d1ce0) Stream removed, broadcasting: 1\nI0921 10:45:27.888574 1226 log.go:181] (0x28d1c70) Go away received\nI0921 10:45:27.892059 1226 log.go:181] (0x28d1c70) (0x28d1ce0) Stream removed, broadcasting: 1\nI0921 10:45:27.892416 1226 log.go:181] (0x28d1c70) (0x26dca10) Stream removed, broadcasting: 3\nI0921 10:45:27.892636 1226 log.go:181] (0x28d1c70) (0x26880e0) Stream removed, broadcasting: 5\n"
Sep 21 10:45:27.903: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Sep 21 10:45:27.903: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Sep 21 10:45:27.903: INFO: Waiting for statefulset status.replicas updated to 0
Sep 21 10:45:27.910: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Sep 21 10:45:37.929: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Sep 21 10:45:37.930: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Sep 21 10:45:37.930: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Sep 21 10:45:37.947: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999984931s
Sep 21 10:45:38.958: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993015539s
Sep 21 10:45:39.969: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.981752021s
Sep 21 10:45:40.981: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.970888057s
Sep 21 10:45:41.996: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.958229353s
Sep 21 10:45:43.006: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.943911743s
Sep 21 10:45:44.017: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.933279067s
Sep 21 10:45:45.029: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.922430996s
Sep 21 10:45:46.045: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.910866241s
Sep 21 10:45:47.055: INFO: Verifying statefulset ss doesn't scale past 3 for another 894.741468ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-2972
Sep 21 10:45:48.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 21 10:45:49.507: INFO: stderr: "I0921 10:45:49.383039 1246 log.go:181] (0x2a34930) (0x2a349a0) Create stream\nI0921 10:45:49.385153 1246 log.go:181] (0x2a34930) (0x2a349a0) Stream added, broadcasting: 1\nI0921 10:45:49.398112 1246 log.go:181] (0x2a34930) Reply frame received for 1\nI0921 10:45:49.398941 1246 log.go:181] (0x2a34930) (0x2d00150) Create stream\nI0921 10:45:49.399043 1246 log.go:181] (0x2a34930) (0x2d00150) Stream added, broadcasting: 3\nI0921 10:45:49.400879 1246 log.go:181] (0x2a34930) Reply frame received for 3\nI0921 10:45:49.401106 1246 log.go:181] (0x2a34930) (0x29c0070) Create stream\nI0921 10:45:49.401163 1246 log.go:181] (0x2a34930) (0x29c0070) Stream added, broadcasting: 5\nI0921 10:45:49.402481 1246 log.go:181] (0x2a34930) Reply frame received for 5\nI0921 10:45:49.490536 1246 log.go:181] (0x2a34930) Data frame received for 5\nI0921 10:45:49.490871 1246 log.go:181] (0x2a34930) Data frame received for 3\nI0921 10:45:49.491149 1246 log.go:181] (0x2a34930) Data frame received for 1\nI0921 10:45:49.491308 1246 log.go:181] (0x2a349a0) (1) Data frame handling\nI0921 10:45:49.491619 1246 log.go:181] (0x2d00150) (3) Data frame handling\nI0921 10:45:49.492003 1246 log.go:181] (0x29c0070) (5) Data frame handling\nI0921 10:45:49.492613 1246 log.go:181] (0x29c0070) (5) Data frame sent\nI0921 10:45:49.492859 1246 log.go:181] (0x2d00150) (3) Data frame sent\nI0921 10:45:49.493059 1246 log.go:181] (0x2a349a0) (1) Data frame sent\nI0921 10:45:49.493271 1246 log.go:181] (0x2a34930) Data frame received for 3\nI0921 10:45:49.493384 1246 log.go:181] (0x2d00150) (3) Data frame handling\nI0921 10:45:49.493688 1246 log.go:181] (0x2a34930) Data frame received for 5\nI0921 10:45:49.493785 1246 log.go:181] (0x29c0070) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0921 10:45:49.495552 1246 log.go:181] (0x2a34930) (0x2a349a0) Stream removed, broadcasting: 1\nI0921 10:45:49.496931 1246 log.go:181] (0x2a34930) Go away received\nI0921 10:45:49.499214 1246 log.go:181] (0x2a34930) (0x2a349a0) Stream removed, broadcasting: 1\nI0921 10:45:49.499482 1246 log.go:181] (0x2a34930) (0x2d00150) Stream removed, broadcasting: 3\nI0921 10:45:49.499645 1246 log.go:181] (0x2a34930) (0x29c0070) Stream removed, broadcasting: 5\n"
Sep 21 10:45:49.508: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Sep 21 10:45:49.508: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Sep 21 10:45:49.508: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 21 10:45:51.025: INFO: stderr: "I0921 10:45:50.931089 1266 log.go:181] (0x2a3c230) (0x2a3c2a0) Create stream\nI0921 10:45:50.932930 1266 log.go:181] (0x2a3c230) (0x2a3c2a0) Stream added, broadcasting: 1\nI0921 10:45:50.943149 1266 log.go:181] (0x2a3c230) Reply frame received for 1\nI0921 10:45:50.943867 1266 log.go:181] (0x2a3c230) (0x318c070) Create stream\nI0921 10:45:50.943953 1266 log.go:181] (0x2a3c230) (0x318c070) Stream added, broadcasting: 3\nI0921 10:45:50.945500 1266 log.go:181] (0x2a3c230) Reply frame received for 3\nI0921 10:45:50.945768 1266 log.go:181] (0x2a3c230) (0x2d98070) Create stream\nI0921 10:45:50.945850 1266 log.go:181] (0x2a3c230) (0x2d98070) Stream added, broadcasting: 5\nI0921 10:45:50.947128 1266 log.go:181] (0x2a3c230) Reply frame received for 5\nI0921 10:45:51.005503 1266 log.go:181] (0x2a3c230) Data frame received for 5\nI0921 10:45:51.005821 1266 log.go:181] (0x2a3c230) Data frame received for 3\nI0921 10:45:51.006160 1266 log.go:181] (0x2a3c230) Data frame received for 1\nI0921 10:45:51.006464 1266 log.go:181] (0x2a3c2a0) (1) Data frame handling\nI0921 10:45:51.006670 1266 log.go:181] (0x2d98070) (5) Data frame handling\nI0921 10:45:51.006992 1266 log.go:181] (0x318c070) (3) Data frame handling\nI0921 10:45:51.007714 1266 log.go:181] (0x2a3c2a0) (1) Data frame sent\nI0921 10:45:51.007953 1266 log.go:181] (0x318c070) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0921 10:45:51.008926 1266 log.go:181] (0x2d98070) (5) Data frame sent\nI0921 10:45:51.009478 1266 log.go:181] (0x2a3c230) Data frame received for 5\nI0921 10:45:51.009560 1266 log.go:181] (0x2d98070) (5) Data frame handling\nI0921 10:45:51.009711 1266 log.go:181] (0x2a3c230) Data frame received for 3\nI0921 10:45:51.009908 1266 log.go:181] (0x318c070) (3) Data frame handling\nI0921 10:45:51.012241 1266 log.go:181] (0x2a3c230) (0x2a3c2a0) Stream removed, broadcasting: 1\nI0921 10:45:51.012542 1266 log.go:181] (0x2a3c230) Go away received\nI0921 10:45:51.015452 1266 log.go:181] (0x2a3c230) (0x2a3c2a0) Stream removed, broadcasting: 1\nI0921 10:45:51.015918 1266 log.go:181] (0x2a3c230) (0x318c070) Stream removed, broadcasting: 3\nI0921 10:45:51.016085 1266 log.go:181] (0x2a3c230) (0x2d98070) Stream removed, broadcasting: 5\n"
Sep 21 10:45:51.027: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Sep 21 10:45:51.027: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Sep 21 10:45:51.027: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 21 10:45:52.495: INFO: rc: 1
Sep 21 10:45:52.496: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Sep 21 10:46:02.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 21 10:46:03.779: INFO: rc: 1
Sep 21 10:46:03.780: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 21 10:46:13.781: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 21 10:46:15.041: INFO: rc: 1
Sep 21 10:46:15.041: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 21 10:46:25.042: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 21 10:46:26.233: INFO: rc: 1
Sep 21 10:46:26.234: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 21 10:46:36.235: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 21 10:46:37.532: INFO: rc: 1
Sep 21 10:46:37.533: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 21 10:46:47.534: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 21 10:46:48.736: INFO: rc: 1
Sep 21 10:46:48.737: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 21 10:46:58.738: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 21 10:47:00.044: INFO: rc: 1
Sep 21 10:47:00.044: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 21 10:47:10.045: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 21 10:47:11.248: INFO: rc: 1
Sep 21 10:47:11.249: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 21 10:47:21.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 21 10:47:22.539: INFO: rc: 1
Sep 21 10:47:22.539: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 21 10:47:32.540: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 21 10:47:33.750: INFO: rc: 1
Sep 21 10:47:33.750: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 21 10:47:43.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 21 10:47:44.950: INFO: rc: 1
Sep 21 10:47:44.951: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 21 10:47:54.952: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 21 10:47:56.208: INFO: rc: 1
Sep 21 10:47:56.208: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 21 10:48:06.215: INFO:
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 10:48:07.469: INFO: rc: 1 Sep 21 10:48:07.469: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 10:48:17.470: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 10:48:18.714: INFO: rc: 1 Sep 21 10:48:18.714: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 10:48:28.715: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 10:48:29.920: INFO: rc: 1 Sep 21 10:48:29.920: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 10:48:39.921: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 10:48:41.103: INFO: rc: 1 Sep 21 10:48:41.104: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 10:48:51.105: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 10:48:52.465: INFO: rc: 1 Sep 21 10:48:52.465: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 10:49:02.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 10:49:03.701: INFO: rc: 1 Sep 21 10:49:03.702: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 10:49:13.703: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 10:49:14.911: INFO: rc: 1 Sep 21 10:49:14.912: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 10:49:24.913: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 10:49:26.094: INFO: rc: 1 Sep 21 10:49:26.094: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 10:49:36.095: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 10:49:37.342: INFO: rc: 1 Sep 21 10:49:37.342: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 10:49:47.343: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 10:49:48.593: INFO: rc: 1 Sep 21 10:49:48.594: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 10:49:58.595: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 10:49:59.788: INFO: rc: 1 Sep 21 10:49:59.788: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 10:50:09.789: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 10:50:11.031: INFO: rc: 1 Sep 21 10:50:11.032: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 10:50:21.033: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 10:50:22.272: INFO: rc: 1 Sep 21 10:50:22.273: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 10:50:32.274: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 10:50:33.452: INFO: rc: 1 Sep 21 10:50:33.453: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 10:50:43.453: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 10:50:44.718: INFO: rc: 1 Sep 21 10:50:44.718: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 10:50:54.719: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 10:50:55.903: INFO: rc: 1 Sep 21 10:50:55.904: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: Sep 21 10:50:55.904: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 21 10:50:55.919: INFO: Deleting all statefulset in ns statefulset-2972 Sep 21 10:50:55.925: INFO: Scaling statefulset ss to 0 Sep 21 10:50:55.937: INFO: Waiting for statefulset status.replicas updated to 0 Sep 21 10:50:55.940: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:50:56.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2972" for this suite. 
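The long run of `rc: 1` entries above is the e2e framework's RunHostCmd retry loop: `kubectl exec` keeps failing with `NotFound` while pod `ss-2` is being recreated, and the framework retries every 10s until the exec finally succeeds. A minimal sketch of that pattern, where `run_host_cmd` is a hypothetical stub standing in for the real `kubectl exec` call:

```shell
#!/bin/sh
# Stub standing in for `kubectl exec ... -- mv -v /tmp/index.html ...`;
# it fails (pod "not found") until the 4th attempt.
attempt=0
run_host_cmd() {
  attempt=$((attempt + 1))
  [ "$attempt" -ge 4 ]
}

# Retry until the command succeeds, mirroring the framework's
# "Waiting 10s to retry failed RunHostCmd" loop (the 10s sleep is omitted).
until run_host_cmd; do
  :
done
echo "succeeded after $attempt attempts"
```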
• [SLOW TEST:390.260 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":303,"completed":98,"skipped":1665,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:50:56.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to 
handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Sep 21 10:51:04.344: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 21 10:51:04.365: INFO: Pod pod-with-poststart-http-hook still exists Sep 21 10:51:06.366: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 21 10:51:06.402: INFO: Pod pod-with-poststart-http-hook still exists Sep 21 10:51:08.366: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 21 10:51:08.373: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:51:08.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8236" for this suite. 
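After deleting the pod, the lifecycle-hook test above polls until the pod object is gone ("Pod ... still exists" / "no longer exists"). A sketch of that polling loop; `pod_exists` is a hypothetical stub for a `kubectl get pod pod-with-poststart-http-hook` check:

```shell
#!/bin/sh
# Stub: reports the pod as present for the first two checks, gone on the third.
checks=0
pod_exists() {
  checks=$((checks + 1))
  [ "$checks" -lt 3 ]
}

# Poll until the pod disappears, as in the "Waiting for pod ... to disappear"
# lines above (the real test waits 2s between checks).
while pod_exists; do
  :
done
echo "pod no longer exists after $checks checks"
```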
• [SLOW TEST:12.358 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":303,"completed":99,"skipped":1703,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:51:08.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl diff finds a difference for Deployments [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create deployment with httpd image Sep 21 10:51:08.462: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f -' Sep 21 10:51:11.045: INFO: stderr: "" Sep 21 10:51:11.045: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Sep 21 10:51:11.046: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config diff -f -' Sep 21 10:51:15.312: INFO: rc: 1 Sep 21 10:51:15.313: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete -f -' Sep 21 10:51:16.520: INFO: stderr: "" Sep 21 10:51:16.520: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:51:16.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4796" for this suite. 
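The `rc: 1` after `kubectl diff -f -` above is the expected outcome, not a failure: like POSIX `diff`, `kubectl diff` exits 0 when the live and declared objects match, 1 when a difference is found, and greater than 1 on error, so the test treats exit code 1 as "diff found". Illustrated here with plain `diff` (the file paths and manifest contents are made up for the example):

```shell
#!/bin/sh
# Two hypothetical manifests that differ only in the image tag.
printf 'image: httpd:2.4.38-alpine\n' > /tmp/declared.yaml
printf 'image: httpd:2.4.39-alpine\n' > /tmp/live.yaml

# diff (and kubectl diff) exit codes: 0 = no changes, 1 = changes found, >1 = error.
diff /tmp/live.yaml /tmp/declared.yaml > /dev/null
rc=$?
echo "rc=$rc"
```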
• [SLOW TEST:8.163 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl diff /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:888 should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":303,"completed":100,"skipped":1738,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should delete a collection of pod templates [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:51:16.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pod templates Sep 21 10:51:16.667: INFO: created test-podtemplate-1 Sep 21 10:51:16.672: INFO: created test-podtemplate-2 Sep 21 10:51:16.678: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in 
the current namespace STEP: delete collection of pod templates Sep 21 10:51:16.685: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Sep 21 10:51:16.974: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:51:16.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-885" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":303,"completed":101,"skipped":1788,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:51:16.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Sep 21 10:51:17.145: INFO: Waiting up to 1m0s for all nodes to be ready Sep 21 10:52:17.229: INFO: Waiting for terminating namespaces to be deleted... 
[BeforeEach] PreemptionExecutionPath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:52:17.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:487 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Sep 21 10:52:23.356: INFO: found a healthy node: kali-worker2 [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 10:52:43.572: INFO: pods created so far: [1 1 1] Sep 21 10:52:43.572: INFO: length of pods created so far: 3 Sep 21 10:52:53.589: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:53:00.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-6722" for this suite. 
[AfterEach] PreemptionExecutionPath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:461 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:53:00.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-2730" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:103.842 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:450 runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":303,"completed":102,"skipped":1789,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:53:00.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-4472
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 21 10:53:00.917: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Sep 21 10:53:01.050: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep 21 10:53:03.057: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep 21 10:53:05.058: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 21 10:53:07.065: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 21 10:53:09.058: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 21 10:53:11.059: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 21 10:53:13.061: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 21 10:53:15.058: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 21 10:53:17.058: INFO: The status of Pod netserver-0 is Running (Ready = true)
Sep 21 10:53:17.068: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Sep 21 10:53:21.190: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.118 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4472 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 21 10:53:21.190: INFO: >>> kubeConfig: /root/.kube/config
I0921 10:53:21.294470 10 log.go:181] (0xb3e1a40) (0xb3e1ce0) Create stream
I0921 10:53:21.294671 10 log.go:181] (0xb3e1a40) (0xb3e1ce0) Stream added, broadcasting: 1
I0921 10:53:21.301346 10 log.go:181] (0xb3e1a40) Reply frame received for 1
I0921 10:53:21.301552 10 log.go:181] (0xb3e1a40) (0xadf91f0) Create stream
I0921 10:53:21.301634 10 log.go:181] (0xb3e1a40) (0xadf91f0) Stream added, broadcasting: 3
I0921 10:53:21.302967 10 log.go:181] (0xb3e1a40) Reply frame received for 3
I0921 10:53:21.303122 10 log.go:181] (0xb3e1a40) (0xaa3d500) Create stream
I0921 10:53:21.303212 10 log.go:181] (0xb3e1a40) (0xaa3d500) Stream added, broadcasting: 5
I0921 10:53:21.304724 10 log.go:181] (0xb3e1a40) Reply frame received for 5
I0921 10:53:22.374191 10 log.go:181] (0xb3e1a40) Data frame received for 3
I0921 10:53:22.374472 10 log.go:181] (0xadf91f0) (3) Data frame handling
I0921 10:53:22.374651 10 log.go:181] (0xadf91f0) (3) Data frame sent
I0921 10:53:22.374839 10 log.go:181] (0xb3e1a40) Data frame received for 3
I0921 10:53:22.374995 10 log.go:181] (0xadf91f0) (3) Data frame handling
I0921 10:53:22.375192 10 log.go:181] (0xb3e1a40) Data frame received for 5
I0921 10:53:22.375308 10 log.go:181] (0xaa3d500) (5) Data frame handling
I0921 10:53:22.377740 10 log.go:181] (0xb3e1a40) Data frame received for 1
I0921 10:53:22.377936 10 log.go:181] (0xb3e1ce0) (1) Data frame handling
I0921 10:53:22.378155 10 log.go:181] (0xb3e1ce0) (1) Data frame sent
I0921 10:53:22.378306 10 log.go:181] (0xb3e1a40) (0xb3e1ce0) Stream removed, broadcasting: 1
I0921 10:53:22.378525 10 log.go:181] (0xb3e1a40) Go away received
I0921 10:53:22.379010 10 log.go:181] (0xb3e1a40) (0xb3e1ce0) Stream removed, broadcasting: 1
I0921 10:53:22.379198 10 log.go:181] (0xb3e1a40) (0xadf91f0) Stream removed, broadcasting: 3
I0921 10:53:22.379366 10 log.go:181] (0xb3e1a40) (0xaa3d500) Stream removed, broadcasting: 5
Sep 21 10:53:22.380: INFO: Found all expected endpoints: [netserver-0]
Sep 21 10:53:22.387: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.156 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4472 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 21 10:53:22.387: INFO: >>> kubeConfig: /root/.kube/config
I0921 10:53:22.494082 10 log.go:181] (0x6eac000) (0x6eac620) Create stream
I0921 10:53:22.494240 10 log.go:181] (0x6eac000) (0x6eac620) Stream added, broadcasting: 1
I0921 10:53:22.498387 10 log.go:181] (0x6eac000) Reply frame received for 1
I0921 10:53:22.498607 10 log.go:181] (0x6eac000) (0x6eadab0) Create stream
I0921 10:53:22.498713 10 log.go:181] (0x6eac000) (0x6eadab0) Stream added, broadcasting: 3
I0921 10:53:22.500534 10 log.go:181] (0x6eac000) Reply frame received for 3
I0921 10:53:22.500714 10 log.go:181] (0x6eac000) (0x7d93340) Create stream
I0921 10:53:22.500863 10 log.go:181] (0x6eac000) (0x7d93340) Stream added, broadcasting: 5
I0921 10:53:22.502530 10 log.go:181] (0x6eac000) Reply frame received for 5
I0921 10:53:23.569092 10 log.go:181] (0x6eac000) Data frame received for 3
I0921 10:53:23.569328 10 log.go:181] (0x6eadab0) (3) Data frame handling
I0921 10:53:23.569427 10 log.go:181] (0x6eadab0) (3) Data frame sent
I0921 10:53:23.569494 10 log.go:181] (0x6eac000) Data frame received for 3
I0921 10:53:23.569593 10 log.go:181] (0x6eadab0) (3) Data frame handling
I0921 10:53:23.569745 10 log.go:181] (0x6eac000) Data frame received for 5
I0921 10:53:23.569977 10 log.go:181] (0x7d93340) (5) Data frame handling
I0921 10:53:23.571256 10 log.go:181] (0x6eac000) Data frame received for 1
I0921 10:53:23.571464 10 log.go:181] (0x6eac620) (1) Data frame handling
I0921 10:53:23.571694 10 log.go:181] (0x6eac620) (1) Data frame sent
I0921 10:53:23.571905 10 log.go:181] (0x6eac000) (0x6eac620) Stream removed, broadcasting: 1
I0921 10:53:23.572253 10 log.go:181] (0x6eac000) Go away received
I0921 10:53:23.572824 10 log.go:181] (0x6eac000) (0x6eac620) Stream removed, broadcasting: 1
I0921 10:53:23.572980 10 log.go:181] (0x6eac000) (0x6eadab0) Stream removed, broadcasting: 3
I0921 10:53:23.573109 10 log.go:181] (0x6eac000) (0x7d93340) Stream removed, broadcasting: 5
Sep 21 10:53:23.573: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:53:23.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4472" for this suite.
• [SLOW TEST:22.750 seconds]
[sig-network] Networking
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":103,"skipped":1800,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:53:23.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[BeforeEach] Update Demo
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308
[It] should create and stop a replication controller [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a replication controller
Sep 21 10:53:23.705: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7773'
Sep 21 10:53:26.305: INFO: stderr: ""
Sep 21 10:53:26.305: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep 21 10:53:26.306: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7773'
Sep 21 10:53:27.550: INFO: stderr: ""
Sep 21 10:53:27.550: INFO: stdout: "update-demo-nautilus-pxglm update-demo-nautilus-zv7nf "
Sep 21 10:53:27.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pxglm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7773'
Sep 21 10:53:28.866: INFO: stderr: ""
Sep 21 10:53:28.866: INFO: stdout: ""
Sep 21 10:53:28.866: INFO: update-demo-nautilus-pxglm is created but not running
Sep 21 10:53:33.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7773'
Sep 21 10:53:35.103: INFO: stderr: ""
Sep 21 10:53:35.103: INFO: stdout: "update-demo-nautilus-pxglm update-demo-nautilus-zv7nf "
Sep 21 10:53:35.104: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pxglm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7773'
Sep 21 10:53:36.424: INFO: stderr: ""
Sep 21 10:53:36.424: INFO: stdout: "true"
Sep 21 10:53:36.425: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pxglm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7773'
Sep 21 10:53:37.676: INFO: stderr: ""
Sep 21 10:53:37.676: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 21 10:53:37.676: INFO: validating pod update-demo-nautilus-pxglm
Sep 21 10:53:37.682: INFO: got data: { "image": "nautilus.jpg" }
Sep 21 10:53:37.682: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 21 10:53:37.683: INFO: update-demo-nautilus-pxglm is verified up and running
Sep 21 10:53:37.683: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zv7nf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7773'
Sep 21 10:53:38.963: INFO: stderr: ""
Sep 21 10:53:38.963: INFO: stdout: "true"
Sep 21 10:53:38.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zv7nf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7773'
Sep 21 10:53:40.214: INFO: stderr: ""
Sep 21 10:53:40.214: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 21 10:53:40.214: INFO: validating pod update-demo-nautilus-zv7nf
Sep 21 10:53:40.220: INFO: got data: { "image": "nautilus.jpg" }
Sep 21 10:53:40.220: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 21 10:53:40.220: INFO: update-demo-nautilus-zv7nf is verified up and running
STEP: using delete to clean up resources
Sep 21 10:53:40.221: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7773'
Sep 21 10:53:41.393: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 21 10:53:41.394: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Sep 21 10:53:41.394: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7773'
Sep 21 10:53:43.231: INFO: stderr: "No resources found in kubectl-7773 namespace.\n"
Sep 21 10:53:43.231: INFO: stdout: ""
Sep 21 10:53:43.232: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7773 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Sep 21 10:53:44.465: INFO: stderr: ""
Sep 21 10:53:44.465: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:53:44.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7773" for this suite.
• [SLOW TEST:20.887 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306
should create and stop a replication controller [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":303,"completed":104,"skipped":1836,"failed":0}
SSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:53:44.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-upd-ae481795-62b6-489e-8a0f-d880fd686465
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-ae481795-62b6-489e-8a0f-d880fd686465
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:53:52.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-929" for this suite.
• [SLOW TEST:8.233 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":105,"skipped":1840,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:53:52.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should adopt matching pods on creation [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:53:58.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8194" for this suite.
• [SLOW TEST:5.532 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":303,"completed":106,"skipped":1867,"failed":0}
SSSSS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:53:58.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Sep 21 10:53:58.381: INFO: Waiting up to 5m0s for pod "downward-api-a4a32fd8-14b7-4b2c-ab18-8b8ff1295d7a" in namespace "downward-api-2446" to be "Succeeded or Failed"
Sep 21 10:53:58.416: INFO: Pod "downward-api-a4a32fd8-14b7-4b2c-ab18-8b8ff1295d7a": Phase="Pending", Reason="", readiness=false. Elapsed: 34.741015ms
Sep 21 10:54:00.524: INFO: Pod "downward-api-a4a32fd8-14b7-4b2c-ab18-8b8ff1295d7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142544789s
Sep 21 10:54:02.534: INFO: Pod "downward-api-a4a32fd8-14b7-4b2c-ab18-8b8ff1295d7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.152339818s
STEP: Saw pod success
Sep 21 10:54:02.534: INFO: Pod "downward-api-a4a32fd8-14b7-4b2c-ab18-8b8ff1295d7a" satisfied condition "Succeeded or Failed"
Sep 21 10:54:02.543: INFO: Trying to get logs from node kali-worker2 pod downward-api-a4a32fd8-14b7-4b2c-ab18-8b8ff1295d7a container dapi-container: 
STEP: delete the pod
Sep 21 10:54:02.603: INFO: Waiting for pod downward-api-a4a32fd8-14b7-4b2c-ab18-8b8ff1295d7a to disappear
Sep 21 10:54:02.607: INFO: Pod downward-api-a4a32fd8-14b7-4b2c-ab18-8b8ff1295d7a no longer exists
[AfterEach] [sig-node] Downward API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:54:02.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2446" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":303,"completed":107,"skipped":1872,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:54:02.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: set up a multi version CRD
Sep 21 10:54:02.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:56:05.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4350" for this suite.
• [SLOW TEST:123.265 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
updates the published spec when one version gets renamed [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":303,"completed":108,"skipped":1885,"failed":0}
S
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:56:05.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with configMap that has name projected-configmap-test-upd-bb60d0ae-b023-4020-b5fa-26605368dbc5
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-bb60d0ae-b023-4020-b5fa-26605368dbc5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:56:12.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4320" for this suite.
• [SLOW TEST:6.201 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":109,"skipped":1886,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:56:12.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-bb95a27f-8e5a-4f29-ad41-95fd898addf7
STEP: Creating a pod to test consume secrets
Sep 21 10:56:12.183: INFO: Waiting up to 5m0s for pod "pod-secrets-3e8c1289-a848-4db7-ba69-f7cbe2563e03" in namespace "secrets-1700" to be "Succeeded or Failed"
Sep 21 10:56:12.206: INFO: Pod "pod-secrets-3e8c1289-a848-4db7-ba69-f7cbe2563e03": Phase="Pending", Reason="", readiness=false. Elapsed: 22.736745ms
Sep 21 10:56:14.214: INFO: Pod "pod-secrets-3e8c1289-a848-4db7-ba69-f7cbe2563e03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03059508s
Sep 21 10:56:16.228: INFO: Pod "pod-secrets-3e8c1289-a848-4db7-ba69-f7cbe2563e03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04513457s
STEP: Saw pod success
Sep 21 10:56:16.229: INFO: Pod "pod-secrets-3e8c1289-a848-4db7-ba69-f7cbe2563e03" satisfied condition "Succeeded or Failed"
Sep 21 10:56:16.233: INFO: Trying to get logs from node kali-worker pod pod-secrets-3e8c1289-a848-4db7-ba69-f7cbe2563e03 container secret-volume-test: 
STEP: delete the pod
Sep 21 10:56:16.302: INFO: Waiting for pod pod-secrets-3e8c1289-a848-4db7-ba69-f7cbe2563e03 to disappear
Sep 21 10:56:16.356: INFO: Pod pod-secrets-3e8c1289-a848-4db7-ba69-f7cbe2563e03 no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:56:16.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1700" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":110,"skipped":1891,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:56:16.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 21 10:56:16.753: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aa473d5e-df48-4fc8-bf40-e7c32b4be052" in namespace "projected-5814" to be "Succeeded or Failed"
Sep 21 10:56:16.792: INFO: Pod "downwardapi-volume-aa473d5e-df48-4fc8-bf40-e7c32b4be052": Phase="Pending", Reason="", readiness=false. Elapsed: 39.038128ms
Sep 21 10:56:18.816: INFO: Pod "downwardapi-volume-aa473d5e-df48-4fc8-bf40-e7c32b4be052": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062710162s
Sep 21 10:56:20.828: INFO: Pod "downwardapi-volume-aa473d5e-df48-4fc8-bf40-e7c32b4be052": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075051452s
STEP: Saw pod success
Sep 21 10:56:20.829: INFO: Pod "downwardapi-volume-aa473d5e-df48-4fc8-bf40-e7c32b4be052" satisfied condition "Succeeded or Failed"
Sep 21 10:56:20.836: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-aa473d5e-df48-4fc8-bf40-e7c32b4be052 container client-container: 
STEP: delete the pod
Sep 21 10:56:20.868: INFO: Waiting for pod downwardapi-volume-aa473d5e-df48-4fc8-bf40-e7c32b4be052 to disappear
Sep 21 10:56:20.880: INFO: Pod downwardapi-volume-aa473d5e-df48-4fc8-bf40-e7c32b4be052 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:56:20.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5814" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":303,"completed":111,"skipped":1919,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:56:20.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-963/configmap-test-cb3b36e1-954e-4ceb-8dd3-eb6ae5fe0cc6
STEP: Creating a pod to test consume configMaps
Sep 21 10:56:21.021: INFO: Waiting up to 5m0s for pod "pod-configmaps-fd0726e3-c230-4b16-88a6-e6fb843f3e9a" in namespace "configmap-963" to be "Succeeded or Failed"
Sep 21 10:56:21.052: INFO: Pod "pod-configmaps-fd0726e3-c230-4b16-88a6-e6fb843f3e9a": Phase="Pending", Reason="", readiness=false. Elapsed: 30.282333ms
Sep 21 10:56:23.141: INFO: Pod "pod-configmaps-fd0726e3-c230-4b16-88a6-e6fb843f3e9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119567065s
Sep 21 10:56:25.149: INFO: Pod "pod-configmaps-fd0726e3-c230-4b16-88a6-e6fb843f3e9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.127771444s
STEP: Saw pod success
Sep 21 10:56:25.150: INFO: Pod "pod-configmaps-fd0726e3-c230-4b16-88a6-e6fb843f3e9a" satisfied condition "Succeeded or Failed"
Sep 21 10:56:25.156: INFO: Trying to get logs from node kali-worker pod pod-configmaps-fd0726e3-c230-4b16-88a6-e6fb843f3e9a container env-test: 
STEP: delete the pod
Sep 21 10:56:25.271: INFO: Waiting for pod pod-configmaps-fd0726e3-c230-4b16-88a6-e6fb843f3e9a to disappear
Sep 21 10:56:25.275: INFO: Pod pod-configmaps-fd0726e3-c230-4b16-88a6-e6fb843f3e9a no longer exists
[AfterEach] [sig-node] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:56:25.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-963" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":112,"skipped":1937,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:56:25.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide
container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 21 10:56:25.378: INFO: Waiting up to 5m0s for pod "downwardapi-volume-24345489-1792-47f6-b402-b35d036904cc" in namespace "projected-584" to be "Succeeded or Failed" Sep 21 10:56:25.391: INFO: Pod "downwardapi-volume-24345489-1792-47f6-b402-b35d036904cc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.854995ms Sep 21 10:56:27.400: INFO: Pod "downwardapi-volume-24345489-1792-47f6-b402-b35d036904cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02159277s Sep 21 10:56:29.406: INFO: Pod "downwardapi-volume-24345489-1792-47f6-b402-b35d036904cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028281248s STEP: Saw pod success Sep 21 10:56:29.407: INFO: Pod "downwardapi-volume-24345489-1792-47f6-b402-b35d036904cc" satisfied condition "Succeeded or Failed" Sep 21 10:56:29.412: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-24345489-1792-47f6-b402-b35d036904cc container client-container: STEP: delete the pod Sep 21 10:56:29.448: INFO: Waiting for pod downwardapi-volume-24345489-1792-47f6-b402-b35d036904cc to disappear Sep 21 10:56:29.456: INFO: Pod downwardapi-volume-24345489-1792-47f6-b402-b35d036904cc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:56:29.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-584" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":113,"skipped":1951,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:56:29.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl label /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1333 STEP: creating the pod Sep 21 10:56:29.571: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6180' Sep 21 10:56:35.705: INFO: stderr: "" Sep 21 10:56:35.706: INFO: stdout: "pod/pause created\n" Sep 21 10:56:35.706: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Sep 21 10:56:35.706: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6180" to be "running and ready" Sep 21 10:56:35.721: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 14.305771ms Sep 21 10:56:37.729: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.022632644s Sep 21 10:56:39.736: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.030169199s Sep 21 10:56:39.737: INFO: Pod "pause" satisfied condition "running and ready" Sep 21 10:56:39.737: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Sep 21 10:56:39.738: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6180' Sep 21 10:56:40.937: INFO: stderr: "" Sep 21 10:56:40.937: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Sep 21 10:56:40.938: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6180' Sep 21 10:56:42.193: INFO: stderr: "" Sep 21 10:56:42.194: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s testing-label-value\n" STEP: removing the label testing-label of a pod Sep 21 10:56:42.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6180' Sep 21 10:56:43.474: INFO: stderr: "" Sep 21 10:56:43.474: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Sep 21 10:56:43.475: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6180' Sep 21 10:56:44.720: INFO: stderr: "" Sep 21 10:56:44.720: INFO: stdout: "NAME READY STATUS RESTARTS 
AGE TESTING-LABEL\npause 1/1 Running 0 9s \n" [AfterEach] Kubectl label /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1340 STEP: using delete to clean up resources Sep 21 10:56:44.721: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6180' Sep 21 10:56:45.988: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 21 10:56:45.988: INFO: stdout: "pod \"pause\" force deleted\n" Sep 21 10:56:45.989: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6180' Sep 21 10:56:47.228: INFO: stderr: "No resources found in kubectl-6180 namespace.\n" Sep 21 10:56:47.229: INFO: stdout: "" Sep 21 10:56:47.229: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6180 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 21 10:56:48.401: INFO: stderr: "" Sep 21 10:56:48.401: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:56:48.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6180" for this suite. 
• [SLOW TEST:18.949 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1330 should update the label on a resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":303,"completed":114,"skipped":1952,"failed":0} [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:56:48.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-3515 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 21 10:56:48.541: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Sep 21 10:56:48.658: INFO: The status of 
Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 21 10:56:50.692: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 21 10:56:52.679: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 21 10:56:54.666: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 21 10:56:56.665: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 21 10:56:58.666: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 21 10:57:00.667: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 21 10:57:02.667: INFO: The status of Pod netserver-0 is Running (Ready = true) Sep 21 10:57:02.679: INFO: The status of Pod netserver-1 is Running (Ready = false) Sep 21 10:57:04.689: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Sep 21 10:57:08.837: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.124:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3515 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 21 10:57:08.837: INFO: >>> kubeConfig: /root/.kube/config I0921 10:57:08.939636 10 log.go:181] (0x8750e70) (0x8751a40) Create stream I0921 10:57:08.939811 10 log.go:181] (0x8750e70) (0x8751a40) Stream added, broadcasting: 1 I0921 10:57:08.944665 10 log.go:181] (0x8750e70) Reply frame received for 1 I0921 10:57:08.944972 10 log.go:181] (0x8750e70) (0x9182310) Create stream I0921 10:57:08.945124 10 log.go:181] (0x8750e70) (0x9182310) Stream added, broadcasting: 3 I0921 10:57:08.947312 10 log.go:181] (0x8750e70) Reply frame received for 3 I0921 10:57:08.947470 10 log.go:181] (0x8750e70) (0x71c9650) Create stream I0921 10:57:08.947579 10 log.go:181] (0x8750e70) (0x71c9650) Stream added, broadcasting: 5 I0921 10:57:08.949306 10 log.go:181] (0x8750e70) Reply frame received for 5 
I0921 10:57:09.002437 10 log.go:181] (0x8750e70) Data frame received for 3 I0921 10:57:09.002662 10 log.go:181] (0x9182310) (3) Data frame handling I0921 10:57:09.002818 10 log.go:181] (0x8750e70) Data frame received for 5 I0921 10:57:09.002993 10 log.go:181] (0x71c9650) (5) Data frame handling I0921 10:57:09.003095 10 log.go:181] (0x9182310) (3) Data frame sent I0921 10:57:09.003228 10 log.go:181] (0x8750e70) Data frame received for 3 I0921 10:57:09.003348 10 log.go:181] (0x9182310) (3) Data frame handling I0921 10:57:09.004254 10 log.go:181] (0x8750e70) Data frame received for 1 I0921 10:57:09.004406 10 log.go:181] (0x8751a40) (1) Data frame handling I0921 10:57:09.004554 10 log.go:181] (0x8751a40) (1) Data frame sent I0921 10:57:09.004709 10 log.go:181] (0x8750e70) (0x8751a40) Stream removed, broadcasting: 1 I0921 10:57:09.004939 10 log.go:181] (0x8750e70) Go away received I0921 10:57:09.005424 10 log.go:181] (0x8750e70) (0x8751a40) Stream removed, broadcasting: 1 I0921 10:57:09.005596 10 log.go:181] (0x8750e70) (0x9182310) Stream removed, broadcasting: 3 I0921 10:57:09.005735 10 log.go:181] (0x8750e70) (0x71c9650) Stream removed, broadcasting: 5 Sep 21 10:57:09.005: INFO: Found all expected endpoints: [netserver-0] Sep 21 10:57:09.011: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.164:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3515 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 21 10:57:09.011: INFO: >>> kubeConfig: /root/.kube/config I0921 10:57:09.119073 10 log.go:181] (0x8bcc230) (0x8bcc310) Create stream I0921 10:57:09.119248 10 log.go:181] (0x8bcc230) (0x8bcc310) Stream added, broadcasting: 1 I0921 10:57:09.123605 10 log.go:181] (0x8bcc230) Reply frame received for 1 I0921 10:57:09.123817 10 log.go:181] (0x8bcc230) (0xafd22a0) Create stream I0921 10:57:09.123915 10 log.go:181] (0x8bcc230) 
(0xafd22a0) Stream added, broadcasting: 3 I0921 10:57:09.125632 10 log.go:181] (0x8bcc230) Reply frame received for 3 I0921 10:57:09.125828 10 log.go:181] (0x8bcc230) (0x8bcc770) Create stream I0921 10:57:09.125920 10 log.go:181] (0x8bcc230) (0x8bcc770) Stream added, broadcasting: 5 I0921 10:57:09.127259 10 log.go:181] (0x8bcc230) Reply frame received for 5 I0921 10:57:09.182817 10 log.go:181] (0x8bcc230) Data frame received for 5 I0921 10:57:09.183000 10 log.go:181] (0x8bcc770) (5) Data frame handling I0921 10:57:09.183186 10 log.go:181] (0x8bcc230) Data frame received for 3 I0921 10:57:09.183416 10 log.go:181] (0xafd22a0) (3) Data frame handling I0921 10:57:09.183582 10 log.go:181] (0xafd22a0) (3) Data frame sent I0921 10:57:09.183698 10 log.go:181] (0x8bcc230) Data frame received for 3 I0921 10:57:09.183841 10 log.go:181] (0xafd22a0) (3) Data frame handling I0921 10:57:09.184829 10 log.go:181] (0x8bcc230) Data frame received for 1 I0921 10:57:09.184961 10 log.go:181] (0x8bcc310) (1) Data frame handling I0921 10:57:09.185071 10 log.go:181] (0x8bcc310) (1) Data frame sent I0921 10:57:09.185186 10 log.go:181] (0x8bcc230) (0x8bcc310) Stream removed, broadcasting: 1 I0921 10:57:09.185338 10 log.go:181] (0x8bcc230) Go away received I0921 10:57:09.185610 10 log.go:181] (0x8bcc230) (0x8bcc310) Stream removed, broadcasting: 1 I0921 10:57:09.185708 10 log.go:181] (0x8bcc230) (0xafd22a0) Stream removed, broadcasting: 3 I0921 10:57:09.185801 10 log.go:181] (0x8bcc230) (0x8bcc770) Stream removed, broadcasting: 5 Sep 21 10:57:09.185: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:57:09.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3515" for this suite. 
• [SLOW TEST:20.779 seconds] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":115,"skipped":1952,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:57:09.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Sep 21 10:57:09.291: INFO: Waiting up to 5m0s for pod "downward-api-3c5ea3e3-0d48-40b6-b979-7eec0357b9f9" in namespace "downward-api-9256" to be "Succeeded or Failed" 
Sep 21 10:57:09.309: INFO: Pod "downward-api-3c5ea3e3-0d48-40b6-b979-7eec0357b9f9": Phase="Pending", Reason="", readiness=false. Elapsed: 17.530708ms Sep 21 10:57:11.316: INFO: Pod "downward-api-3c5ea3e3-0d48-40b6-b979-7eec0357b9f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025221284s Sep 21 10:57:13.325: INFO: Pod "downward-api-3c5ea3e3-0d48-40b6-b979-7eec0357b9f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033900932s STEP: Saw pod success Sep 21 10:57:13.325: INFO: Pod "downward-api-3c5ea3e3-0d48-40b6-b979-7eec0357b9f9" satisfied condition "Succeeded or Failed" Sep 21 10:57:13.331: INFO: Trying to get logs from node kali-worker2 pod downward-api-3c5ea3e3-0d48-40b6-b979-7eec0357b9f9 container dapi-container: STEP: delete the pod Sep 21 10:57:13.376: INFO: Waiting for pod downward-api-3c5ea3e3-0d48-40b6-b979-7eec0357b9f9 to disappear Sep 21 10:57:13.402: INFO: Pod downward-api-3c5ea3e3-0d48-40b6-b979-7eec0357b9f9 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:57:13.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9256" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":303,"completed":116,"skipped":1959,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:57:13.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Sep 21 10:57:18.548: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:57:19.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2817" for this suite. 
• [SLOW TEST:6.173 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":303,"completed":117,"skipped":1964,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:57:19.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6941.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6941.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6941.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6941.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6941.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6941.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 21 10:57:25.776: INFO: DNS probes using dns-6941/dns-test-fcbb8f4f-d003-4a37-a509-9ed6560b522d succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:57:25.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6941" for this suite. 
• [SLOW TEST:6.644 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":303,"completed":118,"skipped":1983,"failed":0} [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:57:26.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Sep 21 10:57:26.624: INFO: Waiting up to 5m0s for pod "pod-648aee60-e9e2-4b2d-8f73-6b1eca4267e9" in namespace "emptydir-8873" to be "Succeeded or Failed" Sep 21 10:57:26.682: INFO: Pod "pod-648aee60-e9e2-4b2d-8f73-6b1eca4267e9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 58.045851ms Sep 21 10:57:28.688: INFO: Pod "pod-648aee60-e9e2-4b2d-8f73-6b1eca4267e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064246565s Sep 21 10:57:30.770: INFO: Pod "pod-648aee60-e9e2-4b2d-8f73-6b1eca4267e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145725051s Sep 21 10:57:32.777: INFO: Pod "pod-648aee60-e9e2-4b2d-8f73-6b1eca4267e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.15314611s STEP: Saw pod success Sep 21 10:57:32.778: INFO: Pod "pod-648aee60-e9e2-4b2d-8f73-6b1eca4267e9" satisfied condition "Succeeded or Failed" Sep 21 10:57:32.784: INFO: Trying to get logs from node kali-worker pod pod-648aee60-e9e2-4b2d-8f73-6b1eca4267e9 container test-container: STEP: delete the pod Sep 21 10:57:32.806: INFO: Waiting for pod pod-648aee60-e9e2-4b2d-8f73-6b1eca4267e9 to disappear Sep 21 10:57:32.810: INFO: Pod pod-648aee60-e9e2-4b2d-8f73-6b1eca4267e9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:57:32.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8873" for this suite. 
• [SLOW TEST:6.586 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":119,"skipped":1983,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 10:57:32.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-2ebec919-48a9-4a4f-ba8a-267717c7b28c STEP: Creating a pod to test consume configMaps Sep 21 10:57:32.946: INFO: Waiting up to 5m0s for pod "pod-configmaps-77fcb578-01c8-480e-bf15-9379ef7d8ba8" in namespace "configmap-4981" to be "Succeeded or Failed" Sep 21 10:57:32.969: INFO: Pod 
"pod-configmaps-77fcb578-01c8-480e-bf15-9379ef7d8ba8": Phase="Pending", Reason="", readiness=false. Elapsed: 23.237812ms Sep 21 10:57:34.977: INFO: Pod "pod-configmaps-77fcb578-01c8-480e-bf15-9379ef7d8ba8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031301953s Sep 21 10:57:36.987: INFO: Pod "pod-configmaps-77fcb578-01c8-480e-bf15-9379ef7d8ba8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040623158s STEP: Saw pod success Sep 21 10:57:36.987: INFO: Pod "pod-configmaps-77fcb578-01c8-480e-bf15-9379ef7d8ba8" satisfied condition "Succeeded or Failed" Sep 21 10:57:36.994: INFO: Trying to get logs from node kali-worker pod pod-configmaps-77fcb578-01c8-480e-bf15-9379ef7d8ba8 container configmap-volume-test: STEP: delete the pod Sep 21 10:57:37.029: INFO: Waiting for pod pod-configmaps-77fcb578-01c8-480e-bf15-9379ef7d8ba8 to disappear Sep 21 10:57:37.038: INFO: Pod pod-configmaps-77fcb578-01c8-480e-bf15-9379ef7d8ba8 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 10:57:37.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4981" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":120,"skipped":1995,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:57:37.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name cm-test-opt-del-743123f3-6c4f-4076-a393-73bea8fcb634
STEP: Creating configMap with name cm-test-opt-upd-9156f533-b7e2-4cd9-a279-e4b6f3f169f1
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-743123f3-6c4f-4076-a393-73bea8fcb634
STEP: Updating configmap cm-test-opt-upd-9156f533-b7e2-4cd9-a279-e4b6f3f169f1
STEP: Creating configMap with name cm-test-opt-create-d007b3bc-78fe-4c9b-83ff-e40ffb08d663
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:59:01.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2675" for this suite.
• [SLOW TEST:84.940 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":121,"skipped":2035,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:59:01.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 21 10:59:02.133: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5043b7ff-3c85-448d-95a9-1affd55c92be" in namespace "downward-api-7010" to be "Succeeded or Failed"
Sep 21 10:59:02.150: INFO: Pod "downwardapi-volume-5043b7ff-3c85-448d-95a9-1affd55c92be": Phase="Pending", Reason="", readiness=false. Elapsed: 15.945568ms
Sep 21 10:59:04.178: INFO: Pod "downwardapi-volume-5043b7ff-3c85-448d-95a9-1affd55c92be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044366561s
Sep 21 10:59:06.187: INFO: Pod "downwardapi-volume-5043b7ff-3c85-448d-95a9-1affd55c92be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053188327s
STEP: Saw pod success
Sep 21 10:59:06.187: INFO: Pod "downwardapi-volume-5043b7ff-3c85-448d-95a9-1affd55c92be" satisfied condition "Succeeded or Failed"
Sep 21 10:59:06.192: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-5043b7ff-3c85-448d-95a9-1affd55c92be container client-container:
STEP: delete the pod
Sep 21 10:59:06.292: INFO: Waiting for pod downwardapi-volume-5043b7ff-3c85-448d-95a9-1affd55c92be to disappear
Sep 21 10:59:06.297: INFO: Pod downwardapi-volume-5043b7ff-3c85-448d-95a9-1affd55c92be no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:59:06.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7010" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":122,"skipped":2046,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:59:06.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-57bd62db-f3b5-485e-a776-7acd659b043a
STEP: Creating a pod to test consume secrets
Sep 21 10:59:06.422: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9e446f13-bd3b-4ec9-b076-fa16d631ad7d" in namespace "projected-2042" to be "Succeeded or Failed"
Sep 21 10:59:06.431: INFO: Pod "pod-projected-secrets-9e446f13-bd3b-4ec9-b076-fa16d631ad7d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.009139ms
Sep 21 10:59:09.529: INFO: Pod "pod-projected-secrets-9e446f13-bd3b-4ec9-b076-fa16d631ad7d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.106804348s
Sep 21 10:59:11.539: INFO: Pod "pod-projected-secrets-9e446f13-bd3b-4ec9-b076-fa16d631ad7d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.117585848s
Sep 21 10:59:13.562: INFO: Pod "pod-projected-secrets-9e446f13-bd3b-4ec9-b076-fa16d631ad7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.140695025s
STEP: Saw pod success
Sep 21 10:59:13.563: INFO: Pod "pod-projected-secrets-9e446f13-bd3b-4ec9-b076-fa16d631ad7d" satisfied condition "Succeeded or Failed"
Sep 21 10:59:13.568: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-9e446f13-bd3b-4ec9-b076-fa16d631ad7d container projected-secret-volume-test:
STEP: delete the pod
Sep 21 10:59:13.670: INFO: Waiting for pod pod-projected-secrets-9e446f13-bd3b-4ec9-b076-fa16d631ad7d to disappear
Sep 21 10:59:13.763: INFO: Pod pod-projected-secrets-9e446f13-bd3b-4ec9-b076-fa16d631ad7d no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:59:13.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2042" for this suite.
• [SLOW TEST:7.510 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":123,"skipped":2069,"failed":0}
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:59:13.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-4f324fde-8b73-4c67-9ef9-e9b94fd23b14
STEP: Creating a pod to test consume configMaps
Sep 21 10:59:13.967: INFO: Waiting up to 5m0s for pod "pod-configmaps-260006e2-045d-4c92-99b5-561b20fa9763" in namespace "configmap-4497" to be "Succeeded or Failed"
Sep 21 10:59:13.992: INFO: Pod "pod-configmaps-260006e2-045d-4c92-99b5-561b20fa9763": Phase="Pending", Reason="", readiness=false. Elapsed: 24.361638ms
Sep 21 10:59:16.025: INFO: Pod "pod-configmaps-260006e2-045d-4c92-99b5-561b20fa9763": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057685892s
Sep 21 10:59:18.064: INFO: Pod "pod-configmaps-260006e2-045d-4c92-99b5-561b20fa9763": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097060394s
STEP: Saw pod success
Sep 21 10:59:18.065: INFO: Pod "pod-configmaps-260006e2-045d-4c92-99b5-561b20fa9763" satisfied condition "Succeeded or Failed"
Sep 21 10:59:18.083: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-260006e2-045d-4c92-99b5-561b20fa9763 container configmap-volume-test:
STEP: delete the pod
Sep 21 10:59:18.125: INFO: Waiting for pod pod-configmaps-260006e2-045d-4c92-99b5-561b20fa9763 to disappear
Sep 21 10:59:18.154: INFO: Pod pod-configmaps-260006e2-045d-4c92-99b5-561b20fa9763 no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:59:18.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4497" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":124,"skipped":2069,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:59:18.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Sep 21 10:59:22.883: INFO: Successfully updated pod "labelsupdateb74afe5f-ae3f-47bc-9003-96baf6dd7795"
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:59:26.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2622" for this suite.
• [SLOW TEST:8.779 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should update labels on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":125,"skipped":2077,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:59:26.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:59:38.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3626" for this suite.
• [SLOW TEST:11.286 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a service. [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":303,"completed":126,"skipped":2101,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:59:38.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 21 10:59:46.594: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 21 10:59:48.614: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736282786, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736282786, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736282786, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736282786, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 21 10:59:51.681: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 21 10:59:51.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8394-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 10:59:52.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9500" for this suite.
STEP: Destroying namespace "webhook-9500-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:14.834 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource with different stored version [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":303,"completed":127,"skipped":2111,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 10:59:53.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename certificates
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support CSR API operations [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting /apis
STEP: getting /apis/certificates.k8s.io
STEP: getting /apis/certificates.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Sep 21 11:00:07.014: INFO: starting watch
STEP: patching
STEP: updating
Sep 21 11:00:07.037: INFO: waiting for watch events with expected annotations
Sep 21 11:00:07.038: INFO: saw patched and updated annotations
STEP: getting /approval
STEP: patching /approval
STEP: updating /approval
STEP: getting /status
STEP: patching /status
STEP: updating /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:00:07.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-8695" for this suite.
• [SLOW TEST:14.233 seconds]
[sig-auth] Certificates API [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should support CSR API operations [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":303,"completed":128,"skipped":2147,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:00:07.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-d803815f-5b6d-4dc7-a1df-0c5b432d914d
STEP: Creating a pod to test consume configMaps
Sep 21 11:00:07.445: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5e154dac-d460-42b8-b89b-a99006a5bfb6" in namespace "projected-3713" to be "Succeeded or Failed"
Sep 21 11:00:07.532: INFO: Pod "pod-projected-configmaps-5e154dac-d460-42b8-b89b-a99006a5bfb6": Phase="Pending", Reason="", readiness=false. Elapsed: 86.798829ms
Sep 21 11:00:09.562: INFO: Pod "pod-projected-configmaps-5e154dac-d460-42b8-b89b-a99006a5bfb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117100892s
Sep 21 11:00:11.570: INFO: Pod "pod-projected-configmaps-5e154dac-d460-42b8-b89b-a99006a5bfb6": Phase="Running", Reason="", readiness=true. Elapsed: 4.124951347s
Sep 21 11:00:13.580: INFO: Pod "pod-projected-configmaps-5e154dac-d460-42b8-b89b-a99006a5bfb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.134765004s
STEP: Saw pod success
Sep 21 11:00:13.580: INFO: Pod "pod-projected-configmaps-5e154dac-d460-42b8-b89b-a99006a5bfb6" satisfied condition "Succeeded or Failed"
Sep 21 11:00:13.585: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-5e154dac-d460-42b8-b89b-a99006a5bfb6 container projected-configmap-volume-test:
STEP: delete the pod
Sep 21 11:00:13.627: INFO: Waiting for pod pod-projected-configmaps-5e154dac-d460-42b8-b89b-a99006a5bfb6 to disappear
Sep 21 11:00:13.655: INFO: Pod pod-projected-configmaps-5e154dac-d460-42b8-b89b-a99006a5bfb6 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:00:13.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3713" for this suite.
• [SLOW TEST:6.348 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":129,"skipped":2172,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:00:13.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Sep 21 11:00:13.806: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8390 /api/v1/namespaces/watch-8390/configmaps/e2e-watch-test-watch-closed 856277ee-62db-4655-a9c3-f455ec1cc139 2060795 0 2020-09-21 11:00:13 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-09-21 11:00:13 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 21 11:00:13.807: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8390 /api/v1/namespaces/watch-8390/configmaps/e2e-watch-test-watch-closed 856277ee-62db-4655-a9c3-f455ec1cc139 2060796 0 2020-09-21 11:00:13 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-09-21 11:00:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Sep 21 11:00:13.830: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8390 /api/v1/namespaces/watch-8390/configmaps/e2e-watch-test-watch-closed 856277ee-62db-4655-a9c3-f455ec1cc139 2060797 0 2020-09-21 11:00:13 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-09-21 11:00:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 21 11:00:13.831: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8390 /api/v1/namespaces/watch-8390/configmaps/e2e-watch-test-watch-closed 856277ee-62db-4655-a9c3-f455ec1cc139 2060798 0 2020-09-21 11:00:13 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-09-21 11:00:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:00:13.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8390" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":303,"completed":130,"skipped":2222,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:00:13.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:00:14.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9030" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":303,"completed":131,"skipped":2232,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-instrumentation] Events API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:00:14.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
STEP: listing events with field selection filtering on source
STEP: listing events with field selection filtering on reportingController
STEP: getting the test event
STEP: patching the test event
STEP: getting the test event
STEP: updating the test event
STEP: getting the test event
STEP: deleting the test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
[AfterEach] [sig-instrumentation] Events API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:00:14.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7161" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":132,"skipped":2251,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should find a service from listing all namespaces [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:00:14.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should find a service from listing all namespaces [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching services
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:00:14.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9865" for this suite.
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":303,"completed":133,"skipped":2279,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:00:14.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-9766 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-9766 STEP: Deleting pre-stop pod Sep 21 11:00:27.644: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:00:27.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-9766" for this suite. • [SLOW TEST:13.285 seconds] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":303,"completed":134,"skipped":2291,"failed":0} [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:00:27.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 21 11:00:27.781: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9391db26-97c5-4e16-9dd6-e05db45dccc6" in namespace "projected-1436" to be "Succeeded or Failed" Sep 21 11:00:27.993: INFO: Pod "downwardapi-volume-9391db26-97c5-4e16-9dd6-e05db45dccc6": Phase="Pending", Reason="", readiness=false. Elapsed: 212.054865ms Sep 21 11:00:30.078: INFO: Pod "downwardapi-volume-9391db26-97c5-4e16-9dd6-e05db45dccc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.297186839s Sep 21 11:00:32.085: INFO: Pod "downwardapi-volume-9391db26-97c5-4e16-9dd6-e05db45dccc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.304532025s STEP: Saw pod success Sep 21 11:00:32.086: INFO: Pod "downwardapi-volume-9391db26-97c5-4e16-9dd6-e05db45dccc6" satisfied condition "Succeeded or Failed" Sep 21 11:00:32.090: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-9391db26-97c5-4e16-9dd6-e05db45dccc6 container client-container: STEP: delete the pod Sep 21 11:00:32.389: INFO: Waiting for pod downwardapi-volume-9391db26-97c5-4e16-9dd6-e05db45dccc6 to disappear Sep 21 11:00:32.397: INFO: Pod downwardapi-volume-9391db26-97c5-4e16-9dd6-e05db45dccc6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:00:32.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1436" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":135,"skipped":2291,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:00:32.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-135 Sep 21 11:00:36.558: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-135 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Sep 21 11:00:38.007: INFO: stderr: "I0921 11:00:37.888550 2296 log.go:181] (0x25ca0e0) (0x25ca150) Create stream\nI0921 11:00:37.892492 2296 log.go:181] (0x25ca0e0) (0x25ca150) Stream added, broadcasting: 1\nI0921 11:00:37.900117 2296 log.go:181] (0x25ca0e0) Reply frame received for 1\nI0921 11:00:37.900594 2296 log.go:181] (0x25ca0e0) 
(0x2c58230) Create stream\nI0921 11:00:37.900657 2296 log.go:181] (0x25ca0e0) (0x2c58230) Stream added, broadcasting: 3\nI0921 11:00:37.901755 2296 log.go:181] (0x25ca0e0) Reply frame received for 3\nI0921 11:00:37.901945 2296 log.go:181] (0x25ca0e0) (0x2c585b0) Create stream\nI0921 11:00:37.901997 2296 log.go:181] (0x25ca0e0) (0x2c585b0) Stream added, broadcasting: 5\nI0921 11:00:37.903050 2296 log.go:181] (0x25ca0e0) Reply frame received for 5\nI0921 11:00:37.981631 2296 log.go:181] (0x25ca0e0) Data frame received for 5\nI0921 11:00:37.981927 2296 log.go:181] (0x2c585b0) (5) Data frame handling\nI0921 11:00:37.982539 2296 log.go:181] (0x2c585b0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0921 11:00:37.989919 2296 log.go:181] (0x25ca0e0) Data frame received for 3\nI0921 11:00:37.990194 2296 log.go:181] (0x2c58230) (3) Data frame handling\nI0921 11:00:37.990707 2296 log.go:181] (0x2c58230) (3) Data frame sent\nI0921 11:00:37.991165 2296 log.go:181] (0x25ca0e0) Data frame received for 3\nI0921 11:00:37.991315 2296 log.go:181] (0x25ca0e0) Data frame received for 5\nI0921 11:00:37.991519 2296 log.go:181] (0x2c585b0) (5) Data frame handling\nI0921 11:00:37.993882 2296 log.go:181] (0x25ca0e0) Data frame received for 1\nI0921 11:00:37.994448 2296 log.go:181] (0x2c58230) (3) Data frame handling\nI0921 11:00:37.994601 2296 log.go:181] (0x25ca150) (1) Data frame handling\nI0921 11:00:37.994772 2296 log.go:181] (0x25ca150) (1) Data frame sent\nI0921 11:00:37.995072 2296 log.go:181] (0x25ca0e0) (0x25ca150) Stream removed, broadcasting: 1\nI0921 11:00:37.995490 2296 log.go:181] (0x25ca0e0) Go away received\nI0921 11:00:37.998726 2296 log.go:181] (0x25ca0e0) (0x25ca150) Stream removed, broadcasting: 1\nI0921 11:00:37.998968 2296 log.go:181] (0x25ca0e0) (0x2c58230) Stream removed, broadcasting: 3\nI0921 11:00:37.999166 2296 log.go:181] (0x25ca0e0) (0x2c585b0) Stream removed, broadcasting: 5\n" Sep 21 11:00:38.007: INFO: stdout: 
"iptables" Sep 21 11:00:38.007: INFO: proxyMode: iptables Sep 21 11:00:38.014: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 21 11:00:38.107: INFO: Pod kube-proxy-mode-detector still exists Sep 21 11:00:40.107: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 21 11:00:40.113: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-135 STEP: creating replication controller affinity-clusterip-timeout in namespace services-135 I0921 11:00:40.177608 10 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-135, replica count: 3 I0921 11:00:43.229229 10 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0921 11:00:46.230151 10 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 21 11:00:46.241: INFO: Creating new exec pod Sep 21 11:00:51.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-135 execpod-affinityx79n6 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Sep 21 11:00:52.870: INFO: stderr: "I0921 11:00:52.740529 2316 log.go:181] (0x2f99110) (0x2f99180) Create stream\nI0921 11:00:52.745658 2316 log.go:181] (0x2f99110) (0x2f99180) Stream added, broadcasting: 1\nI0921 11:00:52.758391 2316 log.go:181] (0x2f99110) Reply frame received for 1\nI0921 11:00:52.759352 2316 log.go:181] (0x2f99110) (0x318e070) Create stream\nI0921 11:00:52.759490 2316 log.go:181] (0x2f99110) (0x318e070) Stream added, broadcasting: 3\nI0921 11:00:52.761382 2316 log.go:181] (0x2f99110) Reply frame received for 3\nI0921 11:00:52.761603 2316 log.go:181] (0x2f99110) (0x2d9e070) Create stream\nI0921 11:00:52.761661 2316 log.go:181] 
(0x2f99110) (0x2d9e070) Stream added, broadcasting: 5\nI0921 11:00:52.762883 2316 log.go:181] (0x2f99110) Reply frame received for 5\nI0921 11:00:52.851301 2316 log.go:181] (0x2f99110) Data frame received for 3\nI0921 11:00:52.851538 2316 log.go:181] (0x318e070) (3) Data frame handling\nI0921 11:00:52.851663 2316 log.go:181] (0x2f99110) Data frame received for 5\nI0921 11:00:52.851834 2316 log.go:181] (0x2d9e070) (5) Data frame handling\nI0921 11:00:52.852304 2316 log.go:181] (0x2f99110) Data frame received for 1\nI0921 11:00:52.852410 2316 log.go:181] (0x2f99180) (1) Data frame handling\nI0921 11:00:52.853339 2316 log.go:181] (0x2d9e070) (5) Data frame sent\nI0921 11:00:52.853605 2316 log.go:181] (0x2f99180) (1) Data frame sent\nI0921 11:00:52.853957 2316 log.go:181] (0x2f99110) Data frame received for 5\nI0921 11:00:52.854089 2316 log.go:181] (0x2d9e070) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0921 11:00:52.855342 2316 log.go:181] (0x2f99110) (0x2f99180) Stream removed, broadcasting: 1\nI0921 11:00:52.856949 2316 log.go:181] (0x2f99110) Go away received\nI0921 11:00:52.860575 2316 log.go:181] (0x2f99110) (0x2f99180) Stream removed, broadcasting: 1\nI0921 11:00:52.860885 2316 log.go:181] (0x2f99110) (0x318e070) Stream removed, broadcasting: 3\nI0921 11:00:52.861105 2316 log.go:181] (0x2f99110) (0x2d9e070) Stream removed, broadcasting: 5\n" Sep 21 11:00:52.871: INFO: stdout: "" Sep 21 11:00:52.877: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-135 execpod-affinityx79n6 -- /bin/sh -x -c nc -zv -t -w 2 10.102.51.192 80' Sep 21 11:00:54.361: INFO: stderr: "I0921 11:00:54.253789 2336 log.go:181] (0x2e30000) (0x2e30070) Create stream\nI0921 11:00:54.258726 2336 log.go:181] (0x2e30000) (0x2e30070) Stream added, broadcasting: 1\nI0921 11:00:54.267879 2336 log.go:181] (0x2e30000) 
Reply frame received for 1\nI0921 11:00:54.268882 2336 log.go:181] (0x2e30000) (0x28f63f0) Create stream\nI0921 11:00:54.268995 2336 log.go:181] (0x2e30000) (0x28f63f0) Stream added, broadcasting: 3\nI0921 11:00:54.271239 2336 log.go:181] (0x2e30000) Reply frame received for 3\nI0921 11:00:54.271727 2336 log.go:181] (0x2e30000) (0x24ac850) Create stream\nI0921 11:00:54.271854 2336 log.go:181] (0x2e30000) (0x24ac850) Stream added, broadcasting: 5\nI0921 11:00:54.273797 2336 log.go:181] (0x2e30000) Reply frame received for 5\nI0921 11:00:54.342353 2336 log.go:181] (0x2e30000) Data frame received for 3\nI0921 11:00:54.342733 2336 log.go:181] (0x28f63f0) (3) Data frame handling\nI0921 11:00:54.343002 2336 log.go:181] (0x2e30000) Data frame received for 5\nI0921 11:00:54.343247 2336 log.go:181] (0x24ac850) (5) Data frame handling\nI0921 11:00:54.343654 2336 log.go:181] (0x2e30000) Data frame received for 1\nI0921 11:00:54.343884 2336 log.go:181] (0x2e30070) (1) Data frame handling\nI0921 11:00:54.345089 2336 log.go:181] (0x24ac850) (5) Data frame sent\nI0921 11:00:54.345254 2336 log.go:181] (0x2e30000) Data frame received for 5\nI0921 11:00:54.345354 2336 log.go:181] (0x24ac850) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.51.192 80\nConnection to 10.102.51.192 80 port [tcp/http] succeeded!\nI0921 11:00:54.345657 2336 log.go:181] (0x2e30070) (1) Data frame sent\nI0921 11:00:54.346514 2336 log.go:181] (0x2e30000) (0x2e30070) Stream removed, broadcasting: 1\nI0921 11:00:54.348517 2336 log.go:181] (0x2e30000) Go away received\nI0921 11:00:54.351681 2336 log.go:181] (0x2e30000) (0x2e30070) Stream removed, broadcasting: 1\nI0921 11:00:54.352067 2336 log.go:181] (0x2e30000) (0x28f63f0) Stream removed, broadcasting: 3\nI0921 11:00:54.352386 2336 log.go:181] (0x2e30000) (0x24ac850) Stream removed, broadcasting: 5\n" Sep 21 11:00:54.362: INFO: stdout: "" Sep 21 11:00:54.362: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 
--kubeconfig=/root/.kube/config exec --namespace=services-135 execpod-affinityx79n6 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.102.51.192:80/ ; done' Sep 21 11:00:55.972: INFO: stderr: "I0921 11:00:55.746779 2356 log.go:181] (0x29d0000) (0x29d0070) Create stream\nI0921 11:00:55.750115 2356 log.go:181] (0x29d0000) (0x29d0070) Stream added, broadcasting: 1\nI0921 11:00:55.760511 2356 log.go:181] (0x29d0000) Reply frame received for 1\nI0921 11:00:55.761267 2356 log.go:181] (0x29d0000) (0x29d02a0) Create stream\nI0921 11:00:55.761366 2356 log.go:181] (0x29d0000) (0x29d02a0) Stream added, broadcasting: 3\nI0921 11:00:55.763182 2356 log.go:181] (0x29d0000) Reply frame received for 3\nI0921 11:00:55.763651 2356 log.go:181] (0x29d0000) (0x25bf1f0) Create stream\nI0921 11:00:55.763756 2356 log.go:181] (0x29d0000) (0x25bf1f0) Stream added, broadcasting: 5\nI0921 11:00:55.765896 2356 log.go:181] (0x29d0000) Reply frame received for 5\nI0921 11:00:55.861484 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.861795 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.862284 2356 log.go:181] (0x29d0000) Data frame received for 5\nI0921 11:00:55.862680 2356 log.go:181] (0x25bf1f0) (5) Data frame handling\nI0921 11:00:55.862922 2356 log.go:181] (0x25bf1f0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.51.192:80/\nI0921 11:00:55.863300 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.867794 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.867917 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.868045 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.868868 2356 log.go:181] (0x29d0000) Data frame received for 5\nI0921 11:00:55.869000 2356 log.go:181] (0x25bf1f0) (5) Data frame handling\nI0921 11:00:55.869159 2356 log.go:181] (0x25bf1f0) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.102.51.192:80/\nI0921 11:00:55.869301 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.869413 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.869538 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.873146 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.873215 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.873290 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.873991 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.874117 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.874246 2356 log.go:181] (0x29d0000) Data frame received for 5\nI0921 11:00:55.874421 2356 log.go:181] (0x25bf1f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.51.192:80/\nI0921 11:00:55.874542 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.874676 2356 log.go:181] (0x25bf1f0) (5) Data frame sent\nI0921 11:00:55.879113 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.879206 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.879277 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.879894 2356 log.go:181] (0x29d0000) Data frame received for 5\nI0921 11:00:55.879970 2356 log.go:181] (0x25bf1f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.51.192:80/\nI0921 11:00:55.880087 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.880329 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.880453 2356 log.go:181] (0x25bf1f0) (5) Data frame sent\nI0921 11:00:55.880577 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.884073 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.884231 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.884342 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 
11:00:55.884802 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.884899 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.884988 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.885062 2356 log.go:181] (0x29d0000) Data frame received for 5\nI0921 11:00:55.885136 2356 log.go:181] (0x25bf1f0) (5) Data frame handling\nI0921 11:00:55.885228 2356 log.go:181] (0x25bf1f0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.51.192:80/\nI0921 11:00:55.890085 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.890186 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.890301 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.890850 2356 log.go:181] (0x29d0000) Data frame received for 5\nI0921 11:00:55.890966 2356 log.go:181] (0x25bf1f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.51.192:80/\nI0921 11:00:55.891059 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.891172 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.891257 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.891335 2356 log.go:181] (0x25bf1f0) (5) Data frame sent\nI0921 11:00:55.895744 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.895830 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.895940 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.896384 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.896493 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.896591 2356 log.go:181] (0x29d0000) Data frame received for 5\nI0921 11:00:55.896705 2356 log.go:181] (0x25bf1f0) (5) Data frame handling\nI0921 11:00:55.896795 2356 log.go:181] (0x25bf1f0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.51.192:80/\nI0921 11:00:55.896878 2356 log.go:181] (0x29d02a0) (3) Data frame 
sent\nI0921 11:00:55.900514 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.900638 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.900762 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.901402 2356 log.go:181] (0x29d0000) Data frame received for 5\nI0921 11:00:55.901563 2356 log.go:181] (0x25bf1f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.51.192:80/\nI0921 11:00:55.901698 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.901814 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.901935 2356 log.go:181] (0x25bf1f0) (5) Data frame sent\nI0921 11:00:55.902069 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.907027 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.907096 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.907168 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.907864 2356 log.go:181] (0x29d0000) Data frame received for 5\nI0921 11:00:55.907957 2356 log.go:181] (0x25bf1f0) (5) Data frame handling\nI0921 11:00:55.908033 2356 log.go:181] (0x25bf1f0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.51.192:80/\nI0921 11:00:55.908104 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.908277 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.908374 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.912693 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.912801 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.912918 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.913999 2356 log.go:181] (0x29d0000) Data frame received for 5\nI0921 11:00:55.914124 2356 log.go:181] (0x25bf1f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.51.192:80/I0921 11:00:55.914228 2356 log.go:181] (0x29d0000) Data 
frame received for 3\nI0921 11:00:55.914346 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.914447 2356 log.go:181] (0x25bf1f0) (5) Data frame sent\nI0921 11:00:55.914572 2356 log.go:181] (0x29d0000) Data frame received for 5\nI0921 11:00:55.914685 2356 log.go:181] (0x25bf1f0) (5) Data frame handling\n\nI0921 11:00:55.914792 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.914914 2356 log.go:181] (0x25bf1f0) (5) Data frame sent\nI0921 11:00:55.918091 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.918222 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.918353 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.918724 2356 log.go:181] (0x29d0000) Data frame received for 5\nI0921 11:00:55.918872 2356 log.go:181] (0x25bf1f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.51.192:80/\nI0921 11:00:55.918979 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.919120 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.919211 2356 log.go:181] (0x25bf1f0) (5) Data frame sent\nI0921 11:00:55.919321 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.924075 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.924347 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.924564 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.924660 2356 log.go:181] (0x29d0000) Data frame received for 5\nI0921 11:00:55.924805 2356 log.go:181] (0x25bf1f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.51.192:80/\nI0921 11:00:55.924897 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.925011 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.925112 2356 log.go:181] (0x25bf1f0) (5) Data frame sent\nI0921 11:00:55.925204 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.930056 2356 log.go:181] 
(0x29d0000) Data frame received for 3\nI0921 11:00:55.930165 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.930329 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.931101 2356 log.go:181] (0x29d0000) Data frame received for 5\nI0921 11:00:55.931235 2356 log.go:181] (0x25bf1f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.51.192:80/\nI0921 11:00:55.931352 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.931533 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.931681 2356 log.go:181] (0x25bf1f0) (5) Data frame sent\nI0921 11:00:55.931788 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.935140 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.935294 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.935509 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.936481 2356 log.go:181] (0x29d0000) Data frame received for 5\nI0921 11:00:55.936616 2356 log.go:181] (0x25bf1f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.51.192:80/\nI0921 11:00:55.936736 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.936929 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.937101 2356 log.go:181] (0x25bf1f0) (5) Data frame sent\nI0921 11:00:55.937285 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.942744 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.942877 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.943036 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.943828 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.943983 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.944263 2356 log.go:181] (0x29d0000) Data frame received for 5\nI0921 11:00:55.944419 2356 log.go:181] (0x25bf1f0) (5) Data frame handling\n+ echo\n+ curl 
-q -s --connect-timeout 2 http://10.102.51.192:80/\nI0921 11:00:55.944507 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.944602 2356 log.go:181] (0x25bf1f0) (5) Data frame sent\nI0921 11:00:55.949339 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.949471 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.949624 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.950178 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.950303 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.950400 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.950497 2356 log.go:181] (0x29d0000) Data frame received for 5\nI0921 11:00:55.950578 2356 log.go:181] (0x25bf1f0) (5) Data frame handling\nI0921 11:00:55.950678 2356 log.go:181] (0x25bf1f0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.51.192:80/\nI0921 11:00:55.954619 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.954739 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.954869 2356 log.go:181] (0x29d02a0) (3) Data frame sent\nI0921 11:00:55.955478 2356 log.go:181] (0x29d0000) Data frame received for 5\nI0921 11:00:55.955589 2356 log.go:181] (0x25bf1f0) (5) Data frame handling\nI0921 11:00:55.955869 2356 log.go:181] (0x29d0000) Data frame received for 3\nI0921 11:00:55.955992 2356 log.go:181] (0x29d02a0) (3) Data frame handling\nI0921 11:00:55.957850 2356 log.go:181] (0x29d0000) Data frame received for 1\nI0921 11:00:55.958009 2356 log.go:181] (0x29d0070) (1) Data frame handling\nI0921 11:00:55.958114 2356 log.go:181] (0x29d0070) (1) Data frame sent\nI0921 11:00:55.958654 2356 log.go:181] (0x29d0000) (0x29d0070) Stream removed, broadcasting: 1\nI0921 11:00:55.961429 2356 log.go:181] (0x29d0000) Go away received\nI0921 11:00:55.963686 2356 log.go:181] (0x29d0000) (0x29d0070) Stream removed, broadcasting: 1\nI0921 11:00:55.963949 2356 
log.go:181] (0x29d0000) (0x29d02a0) Stream removed, broadcasting: 3\nI0921 11:00:55.964403 2356 log.go:181] (0x29d0000) (0x25bf1f0) Stream removed, broadcasting: 5\n" Sep 21 11:00:55.978: INFO: stdout: "\naffinity-clusterip-timeout-ggshv\naffinity-clusterip-timeout-ggshv\naffinity-clusterip-timeout-ggshv\naffinity-clusterip-timeout-ggshv\naffinity-clusterip-timeout-ggshv\naffinity-clusterip-timeout-ggshv\naffinity-clusterip-timeout-ggshv\naffinity-clusterip-timeout-ggshv\naffinity-clusterip-timeout-ggshv\naffinity-clusterip-timeout-ggshv\naffinity-clusterip-timeout-ggshv\naffinity-clusterip-timeout-ggshv\naffinity-clusterip-timeout-ggshv\naffinity-clusterip-timeout-ggshv\naffinity-clusterip-timeout-ggshv\naffinity-clusterip-timeout-ggshv" Sep 21 11:00:55.978: INFO: Received response from host: affinity-clusterip-timeout-ggshv Sep 21 11:00:55.978: INFO: Received response from host: affinity-clusterip-timeout-ggshv Sep 21 11:00:55.978: INFO: Received response from host: affinity-clusterip-timeout-ggshv Sep 21 11:00:55.978: INFO: Received response from host: affinity-clusterip-timeout-ggshv Sep 21 11:00:55.978: INFO: Received response from host: affinity-clusterip-timeout-ggshv Sep 21 11:00:55.978: INFO: Received response from host: affinity-clusterip-timeout-ggshv Sep 21 11:00:55.978: INFO: Received response from host: affinity-clusterip-timeout-ggshv Sep 21 11:00:55.978: INFO: Received response from host: affinity-clusterip-timeout-ggshv Sep 21 11:00:55.978: INFO: Received response from host: affinity-clusterip-timeout-ggshv Sep 21 11:00:55.978: INFO: Received response from host: affinity-clusterip-timeout-ggshv Sep 21 11:00:55.978: INFO: Received response from host: affinity-clusterip-timeout-ggshv Sep 21 11:00:55.978: INFO: Received response from host: affinity-clusterip-timeout-ggshv Sep 21 11:00:55.979: INFO: Received response from host: affinity-clusterip-timeout-ggshv Sep 21 11:00:55.979: INFO: Received response from host: affinity-clusterip-timeout-ggshv Sep 
21 11:00:55.979: INFO: Received response from host: affinity-clusterip-timeout-ggshv Sep 21 11:00:55.979: INFO: Received response from host: affinity-clusterip-timeout-ggshv Sep 21 11:00:55.979: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-135 execpod-affinityx79n6 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.102.51.192:80/' Sep 21 11:00:57.615: INFO: stderr: "I0921 11:00:57.507363 2377 log.go:181] (0x251a770) (0x251ad20) Create stream\nI0921 11:00:57.509767 2377 log.go:181] (0x251a770) (0x251ad20) Stream added, broadcasting: 1\nI0921 11:00:57.521256 2377 log.go:181] (0x251a770) Reply frame received for 1\nI0921 11:00:57.522075 2377 log.go:181] (0x251a770) (0x24ea150) Create stream\nI0921 11:00:57.522173 2377 log.go:181] (0x251a770) (0x24ea150) Stream added, broadcasting: 3\nI0921 11:00:57.524340 2377 log.go:181] (0x251a770) Reply frame received for 3\nI0921 11:00:57.525129 2377 log.go:181] (0x251a770) (0x26822a0) Create stream\nI0921 11:00:57.525260 2377 log.go:181] (0x251a770) (0x26822a0) Stream added, broadcasting: 5\nI0921 11:00:57.527043 2377 log.go:181] (0x251a770) Reply frame received for 5\nI0921 11:00:57.597063 2377 log.go:181] (0x251a770) Data frame received for 5\nI0921 11:00:57.597447 2377 log.go:181] (0x26822a0) (5) Data frame handling\nI0921 11:00:57.598152 2377 log.go:181] (0x26822a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.102.51.192:80/\nI0921 11:00:57.599329 2377 log.go:181] (0x251a770) Data frame received for 3\nI0921 11:00:57.599443 2377 log.go:181] (0x24ea150) (3) Data frame handling\nI0921 11:00:57.599582 2377 log.go:181] (0x24ea150) (3) Data frame sent\nI0921 11:00:57.599755 2377 log.go:181] (0x251a770) Data frame received for 5\nI0921 11:00:57.599849 2377 log.go:181] (0x26822a0) (5) Data frame handling\nI0921 11:00:57.600289 2377 log.go:181] (0x251a770) Data frame received for 3\nI0921 11:00:57.600415 2377 log.go:181] 
(0x24ea150) (3) Data frame handling\nI0921 11:00:57.601918 2377 log.go:181] (0x251a770) Data frame received for 1\nI0921 11:00:57.602043 2377 log.go:181] (0x251ad20) (1) Data frame handling\nI0921 11:00:57.602154 2377 log.go:181] (0x251ad20) (1) Data frame sent\nI0921 11:00:57.602644 2377 log.go:181] (0x251a770) (0x251ad20) Stream removed, broadcasting: 1\nI0921 11:00:57.604310 2377 log.go:181] (0x251a770) Go away received\nI0921 11:00:57.607764 2377 log.go:181] (0x251a770) (0x251ad20) Stream removed, broadcasting: 1\nI0921 11:00:57.608022 2377 log.go:181] (0x251a770) (0x24ea150) Stream removed, broadcasting: 3\nI0921 11:00:57.608325 2377 log.go:181] (0x251a770) (0x26822a0) Stream removed, broadcasting: 5\n" Sep 21 11:00:57.617: INFO: stdout: "affinity-clusterip-timeout-ggshv" Sep 21 11:01:12.618: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-135 execpod-affinityx79n6 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.102.51.192:80/' Sep 21 11:01:14.121: INFO: stderr: "I0921 11:01:13.983219 2397 log.go:181] (0x2f2a000) (0x2f2a070) Create stream\nI0921 11:01:13.985203 2397 log.go:181] (0x2f2a000) (0x2f2a070) Stream added, broadcasting: 1\nI0921 11:01:13.993008 2397 log.go:181] (0x2f2a000) Reply frame received for 1\nI0921 11:01:13.993437 2397 log.go:181] (0x2f2a000) (0x2f2a310) Create stream\nI0921 11:01:13.993492 2397 log.go:181] (0x2f2a000) (0x2f2a310) Stream added, broadcasting: 3\nI0921 11:01:13.994700 2397 log.go:181] (0x2f2a000) Reply frame received for 3\nI0921 11:01:13.994899 2397 log.go:181] (0x2f2a000) (0x27a4070) Create stream\nI0921 11:01:13.994952 2397 log.go:181] (0x2f2a000) (0x27a4070) Stream added, broadcasting: 5\nI0921 11:01:13.996418 2397 log.go:181] (0x2f2a000) Reply frame received for 5\nI0921 11:01:14.087762 2397 log.go:181] (0x2f2a000) Data frame received for 5\nI0921 11:01:14.088231 2397 log.go:181] (0x27a4070) (5) Data frame handling\nI0921 
11:01:14.089112 2397 log.go:181] (0x27a4070) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.102.51.192:80/\nI0921 11:01:14.093245 2397 log.go:181] (0x2f2a000) Data frame received for 3\nI0921 11:01:14.093478 2397 log.go:181] (0x2f2a310) (3) Data frame handling\nI0921 11:01:14.093668 2397 log.go:181] (0x2f2a310) (3) Data frame sent\nI0921 11:01:14.093839 2397 log.go:181] (0x2f2a000) Data frame received for 3\nI0921 11:01:14.093994 2397 log.go:181] (0x2f2a310) (3) Data frame handling\nI0921 11:01:14.094344 2397 log.go:181] (0x2f2a000) Data frame received for 5\nI0921 11:01:14.094493 2397 log.go:181] (0x27a4070) (5) Data frame handling\nI0921 11:01:14.096467 2397 log.go:181] (0x2f2a000) Data frame received for 1\nI0921 11:01:14.096628 2397 log.go:181] (0x2f2a070) (1) Data frame handling\nI0921 11:01:14.096773 2397 log.go:181] (0x2f2a070) (1) Data frame sent\nI0921 11:01:14.097692 2397 log.go:181] (0x2f2a000) (0x2f2a070) Stream removed, broadcasting: 1\nI0921 11:01:14.099790 2397 log.go:181] (0x2f2a000) Go away received\nI0921 11:01:14.114286 2397 log.go:181] (0x2f2a000) (0x2f2a070) Stream removed, broadcasting: 1\nI0921 11:01:14.114530 2397 log.go:181] (0x2f2a000) (0x2f2a310) Stream removed, broadcasting: 3\nI0921 11:01:14.114739 2397 log.go:181] (0x2f2a000) (0x27a4070) Stream removed, broadcasting: 5\n" Sep 21 11:01:14.122: INFO: stdout: "affinity-clusterip-timeout-tqjsk" Sep 21 11:01:14.122: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-135, will wait for the garbage collector to delete the pods Sep 21 11:01:14.258: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 9.038ms Sep 21 11:01:14.758: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 500.793206ms [AfterEach] [sig-network] Services 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:01:23.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-135" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:50.914 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":136,"skipped":2301,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:01:23.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 21 11:01:33.702: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 21 11:01:35.723: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736282893, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736282893, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736282893, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736282893, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 21 11:01:38.775: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 11:01:38.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration 
API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:01:39.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8058" for this suite. STEP: Destroying namespace "webhook-8058-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.858 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":303,"completed":137,"skipped":2313,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:01:40.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:01:47.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-892" for this suite. • [SLOW TEST:7.119 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":303,"completed":138,"skipped":2338,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:01:47.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all Sep 21 11:01:47.385: INFO: Waiting up to 5m0s for pod "client-containers-191aac41-8498-455a-bdc8-8293c87d06b6" in namespace "containers-8999" to be "Succeeded or Failed" Sep 21 11:01:47.412: INFO: Pod "client-containers-191aac41-8498-455a-bdc8-8293c87d06b6": Phase="Pending", Reason="", readiness=false. Elapsed: 27.436218ms Sep 21 11:01:49.419: INFO: Pod "client-containers-191aac41-8498-455a-bdc8-8293c87d06b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034183043s Sep 21 11:01:51.449: INFO: Pod "client-containers-191aac41-8498-455a-bdc8-8293c87d06b6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.064435919s STEP: Saw pod success Sep 21 11:01:51.449: INFO: Pod "client-containers-191aac41-8498-455a-bdc8-8293c87d06b6" satisfied condition "Succeeded or Failed" Sep 21 11:01:51.454: INFO: Trying to get logs from node kali-worker2 pod client-containers-191aac41-8498-455a-bdc8-8293c87d06b6 container test-container: STEP: delete the pod Sep 21 11:01:51.506: INFO: Waiting for pod client-containers-191aac41-8498-455a-bdc8-8293c87d06b6 to disappear Sep 21 11:01:51.513: INFO: Pod client-containers-191aac41-8498-455a-bdc8-8293c87d06b6 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:01:51.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8999" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":303,"completed":139,"skipped":2351,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:01:51.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should get a host IP [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod Sep 21 11:01:55.644: INFO: Pod pod-hostip-126a19af-a118-42a0-af50-dad5da69aa4b has hostIP: 172.18.0.11 [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:01:55.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7055" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":303,"completed":140,"skipped":2377,"failed":0} ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:01:55.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2267 STEP: changing the ExternalName service to type=NodePort STEP: 
creating replication controller externalname-service in namespace services-2267 I0921 11:01:55.885429 10 runners.go:190] Created replication controller with name: externalname-service, namespace: services-2267, replica count: 2 I0921 11:01:58.937040 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0921 11:02:01.938248 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 21 11:02:01.938: INFO: Creating new exec pod Sep 21 11:02:07.012: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-2267 execpod5c27j -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Sep 21 11:02:08.682: INFO: stderr: "I0921 11:02:08.545406 2417 log.go:181] (0x2c03810) (0x2c03880) Create stream\nI0921 11:02:08.548968 2417 log.go:181] (0x2c03810) (0x2c03880) Stream added, broadcasting: 1\nI0921 11:02:08.556748 2417 log.go:181] (0x2c03810) Reply frame received for 1\nI0921 11:02:08.557488 2417 log.go:181] (0x2c03810) (0x265abd0) Create stream\nI0921 11:02:08.557608 2417 log.go:181] (0x2c03810) (0x265abd0) Stream added, broadcasting: 3\nI0921 11:02:08.559073 2417 log.go:181] (0x2c03810) Reply frame received for 3\nI0921 11:02:08.559368 2417 log.go:181] (0x2c03810) (0x277e620) Create stream\nI0921 11:02:08.559432 2417 log.go:181] (0x2c03810) (0x277e620) Stream added, broadcasting: 5\nI0921 11:02:08.560992 2417 log.go:181] (0x2c03810) Reply frame received for 5\nI0921 11:02:08.665185 2417 log.go:181] (0x2c03810) Data frame received for 3\nI0921 11:02:08.665471 2417 log.go:181] (0x2c03810) Data frame received for 5\nI0921 11:02:08.665676 2417 log.go:181] (0x2c03810) Data frame received for 1\nI0921 11:02:08.665798 2417 log.go:181] (0x2c03880) (1) Data frame handling\nI0921 11:02:08.665988 2417 
log.go:181] (0x277e620) (5) Data frame handling\nI0921 11:02:08.666213 2417 log.go:181] (0x265abd0) (3) Data frame handling\nI0921 11:02:08.667405 2417 log.go:181] (0x2c03880) (1) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0921 11:02:08.668329 2417 log.go:181] (0x277e620) (5) Data frame sent\nI0921 11:02:08.668496 2417 log.go:181] (0x2c03810) Data frame received for 5\nI0921 11:02:08.668593 2417 log.go:181] (0x277e620) (5) Data frame handling\nI0921 11:02:08.669362 2417 log.go:181] (0x2c03810) (0x2c03880) Stream removed, broadcasting: 1\nI0921 11:02:08.671756 2417 log.go:181] (0x2c03810) Go away received\nI0921 11:02:08.675490 2417 log.go:181] (0x2c03810) (0x2c03880) Stream removed, broadcasting: 1\nI0921 11:02:08.675736 2417 log.go:181] (0x2c03810) (0x265abd0) Stream removed, broadcasting: 3\nI0921 11:02:08.675953 2417 log.go:181] (0x2c03810) (0x277e620) Stream removed, broadcasting: 5\n" Sep 21 11:02:08.683: INFO: stdout: "" Sep 21 11:02:08.687: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-2267 execpod5c27j -- /bin/sh -x -c nc -zv -t -w 2 10.99.11.145 80' Sep 21 11:02:10.176: INFO: stderr: "I0921 11:02:10.039525 2437 log.go:181] (0x29ba0e0) (0x29ba150) Create stream\nI0921 11:02:10.043631 2437 log.go:181] (0x29ba0e0) (0x29ba150) Stream added, broadcasting: 1\nI0921 11:02:10.052573 2437 log.go:181] (0x29ba0e0) Reply frame received for 1\nI0921 11:02:10.053143 2437 log.go:181] (0x29ba0e0) (0x24dc070) Create stream\nI0921 11:02:10.053215 2437 log.go:181] (0x29ba0e0) (0x24dc070) Stream added, broadcasting: 3\nI0921 11:02:10.054956 2437 log.go:181] (0x29ba0e0) Reply frame received for 3\nI0921 11:02:10.055192 2437 log.go:181] (0x29ba0e0) (0x2aa8070) Create stream\nI0921 11:02:10.055258 2437 log.go:181] (0x29ba0e0) (0x2aa8070) Stream added, broadcasting: 5\nI0921 11:02:10.056780 2437 log.go:181] 
(0x29ba0e0) Reply frame received for 5\nI0921 11:02:10.160048 2437 log.go:181] (0x29ba0e0) Data frame received for 1\nI0921 11:02:10.160376 2437 log.go:181] (0x29ba0e0) Data frame received for 5\nI0921 11:02:10.160497 2437 log.go:181] (0x29ba150) (1) Data frame handling\nI0921 11:02:10.161257 2437 log.go:181] (0x29ba0e0) Data frame received for 3\nI0921 11:02:10.161414 2437 log.go:181] (0x24dc070) (3) Data frame handling\nI0921 11:02:10.161893 2437 log.go:181] (0x2aa8070) (5) Data frame handling\nI0921 11:02:10.163463 2437 log.go:181] (0x2aa8070) (5) Data frame sent\nI0921 11:02:10.163992 2437 log.go:181] (0x29ba150) (1) Data frame sent\n+ nc -zv -t -w 2 10.99.11.145 80\nConnection to 10.99.11.145 80 port [tcp/http] succeeded!\nI0921 11:02:10.164307 2437 log.go:181] (0x29ba0e0) Data frame received for 5\nI0921 11:02:10.164551 2437 log.go:181] (0x2aa8070) (5) Data frame handling\nI0921 11:02:10.165768 2437 log.go:181] (0x29ba0e0) (0x29ba150) Stream removed, broadcasting: 1\nI0921 11:02:10.166201 2437 log.go:181] (0x29ba0e0) Go away received\nI0921 11:02:10.168681 2437 log.go:181] (0x29ba0e0) (0x29ba150) Stream removed, broadcasting: 1\nI0921 11:02:10.168913 2437 log.go:181] (0x29ba0e0) (0x24dc070) Stream removed, broadcasting: 3\nI0921 11:02:10.169059 2437 log.go:181] (0x29ba0e0) (0x2aa8070) Stream removed, broadcasting: 5\n" Sep 21 11:02:10.178: INFO: stdout: "" Sep 21 11:02:10.178: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-2267 execpod5c27j -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 31938' Sep 21 11:02:11.702: INFO: stderr: "I0921 11:02:11.563861 2457 log.go:181] (0x25f24d0) (0x25f28c0) Create stream\nI0921 11:02:11.566491 2457 log.go:181] (0x25f24d0) (0x25f28c0) Stream added, broadcasting: 1\nI0921 11:02:11.582586 2457 log.go:181] (0x25f24d0) Reply frame received for 1\nI0921 11:02:11.587554 2457 log.go:181] (0x25f24d0) (0x299c2a0) Create stream\nI0921 11:02:11.587828 2457 
log.go:181] (0x25f24d0) (0x299c2a0) Stream added, broadcasting: 3\nI0921 11:02:11.589281 2457 log.go:181] (0x25f24d0) Reply frame received for 3\nI0921 11:02:11.589540 2457 log.go:181] (0x25f24d0) (0x299c460) Create stream\nI0921 11:02:11.589606 2457 log.go:181] (0x25f24d0) (0x299c460) Stream added, broadcasting: 5\nI0921 11:02:11.590867 2457 log.go:181] (0x25f24d0) Reply frame received for 5\nI0921 11:02:11.683356 2457 log.go:181] (0x25f24d0) Data frame received for 3\nI0921 11:02:11.683703 2457 log.go:181] (0x25f24d0) Data frame received for 1\nI0921 11:02:11.684245 2457 log.go:181] (0x25f24d0) Data frame received for 5\nI0921 11:02:11.684593 2457 log.go:181] (0x299c460) (5) Data frame handling\nI0921 11:02:11.684953 2457 log.go:181] (0x25f28c0) (1) Data frame handling\nI0921 11:02:11.685299 2457 log.go:181] (0x299c2a0) (3) Data frame handling\nI0921 11:02:11.686050 2457 log.go:181] (0x25f28c0) (1) Data frame sent\nI0921 11:02:11.687085 2457 log.go:181] (0x299c460) (5) Data frame sent\nI0921 11:02:11.687498 2457 log.go:181] (0x25f24d0) Data frame received for 5\n+ nc -zv -t -w 2 172.18.0.11 31938\nConnection to 172.18.0.11 31938 port [tcp/31938] succeeded!\nI0921 11:02:11.687574 2457 log.go:181] (0x299c460) (5) Data frame handling\nI0921 11:02:11.688862 2457 log.go:181] (0x25f24d0) (0x25f28c0) Stream removed, broadcasting: 1\nI0921 11:02:11.689850 2457 log.go:181] (0x25f24d0) Go away received\nI0921 11:02:11.692646 2457 log.go:181] (0x25f24d0) (0x25f28c0) Stream removed, broadcasting: 1\nI0921 11:02:11.692840 2457 log.go:181] (0x25f24d0) (0x299c2a0) Stream removed, broadcasting: 3\nI0921 11:02:11.692966 2457 log.go:181] (0x25f24d0) (0x299c460) Stream removed, broadcasting: 5\n" Sep 21 11:02:11.703: INFO: stdout: "" Sep 21 11:02:11.704: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-2267 execpod5c27j -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 31938' Sep 21 11:02:13.270: INFO: 
stderr: "I0921 11:02:13.122209 2478 log.go:181] (0x2bddc70) (0x2bddce0) Create stream\nI0921 11:02:13.123902 2478 log.go:181] (0x2bddc70) (0x2bddce0) Stream added, broadcasting: 1\nI0921 11:02:13.155666 2478 log.go:181] (0x2bddc70) Reply frame received for 1\nI0921 11:02:13.156353 2478 log.go:181] (0x2bddc70) (0x2bdc070) Create stream\nI0921 11:02:13.156427 2478 log.go:181] (0x2bddc70) (0x2bdc070) Stream added, broadcasting: 3\nI0921 11:02:13.157760 2478 log.go:181] (0x2bddc70) Reply frame received for 3\nI0921 11:02:13.157993 2478 log.go:181] (0x2bddc70) (0x28e8230) Create stream\nI0921 11:02:13.158075 2478 log.go:181] (0x2bddc70) (0x28e8230) Stream added, broadcasting: 5\nI0921 11:02:13.159066 2478 log.go:181] (0x2bddc70) Reply frame received for 5\nI0921 11:02:13.250443 2478 log.go:181] (0x2bddc70) Data frame received for 5\nI0921 11:02:13.250937 2478 log.go:181] (0x2bddc70) Data frame received for 3\nI0921 11:02:13.251144 2478 log.go:181] (0x2bdc070) (3) Data frame handling\nI0921 11:02:13.251289 2478 log.go:181] (0x28e8230) (5) Data frame handling\nI0921 11:02:13.251537 2478 log.go:181] (0x2bddc70) Data frame received for 1\nI0921 11:02:13.251746 2478 log.go:181] (0x2bddce0) (1) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 31938\nConnection to 172.18.0.12 31938 port [tcp/31938] succeeded!\nI0921 11:02:13.253975 2478 log.go:181] (0x2bddce0) (1) Data frame sent\nI0921 11:02:13.254285 2478 log.go:181] (0x28e8230) (5) Data frame sent\nI0921 11:02:13.254433 2478 log.go:181] (0x2bddc70) Data frame received for 5\nI0921 11:02:13.254546 2478 log.go:181] (0x28e8230) (5) Data frame handling\nI0921 11:02:13.255811 2478 log.go:181] (0x2bddc70) (0x2bddce0) Stream removed, broadcasting: 1\nI0921 11:02:13.257303 2478 log.go:181] (0x2bddc70) Go away received\nI0921 11:02:13.260559 2478 log.go:181] (0x2bddc70) (0x2bddce0) Stream removed, broadcasting: 1\nI0921 11:02:13.261018 2478 log.go:181] (0x2bddc70) (0x2bdc070) Stream removed, broadcasting: 3\nI0921 11:02:13.261214 
2478 log.go:181] (0x2bddc70) (0x28e8230) Stream removed, broadcasting: 5\n" Sep 21 11:02:13.270: INFO: stdout: "" Sep 21 11:02:13.271: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:02:13.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2267" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:17.670 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":303,"completed":141,"skipped":2377,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:02:13.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl logs /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1415 STEP: creating a pod Sep 21 11:02:13.419: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.20 --namespace=kubectl-2988 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Sep 21 11:02:14.729: INFO: stderr: "" Sep 21 11:02:14.729: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. Sep 21 11:02:14.730: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Sep 21 11:02:14.730: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-2988" to be "running and ready, or succeeded" Sep 21 11:02:14.747: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 16.422345ms Sep 21 11:02:16.756: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025370532s Sep 21 11:02:18.807: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.076656917s Sep 21 11:02:18.807: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Sep 21 11:02:18.808: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for matching strings Sep 21 11:02:18.808: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2988' Sep 21 11:02:20.078: INFO: stderr: "" Sep 21 11:02:20.078: INFO: stdout: "I0921 11:02:17.194977 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/5sp 200\nI0921 11:02:17.395187 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/pr2 398\nI0921 11:02:17.595225 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/zkg4 551\nI0921 11:02:17.795189 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/7lh 551\nI0921 11:02:17.995131 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/bh69 405\nI0921 11:02:18.195175 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/dx9d 218\nI0921 11:02:18.395148 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/pck 500\nI0921 11:02:18.595147 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/5t8z 432\nI0921 11:02:18.795182 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/2rtv 319\nI0921 11:02:18.995131 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/w26w 326\nI0921 11:02:19.195138 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/5hrj 467\nI0921 11:02:19.395118 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/j84 468\nI0921 11:02:19.595142 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/lfpj 596\nI0921 11:02:19.795159 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/xrq 532\nI0921 11:02:19.995113 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/6f6 209\n" STEP: limiting log lines Sep 21 11:02:20.079: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2988 --tail=1' Sep 21 11:02:21.356: INFO: 
stderr: "" Sep 21 11:02:21.356: INFO: stdout: "I0921 11:02:21.195123 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/g26 306\n" Sep 21 11:02:21.356: INFO: got output "I0921 11:02:21.195123 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/g26 306\n" STEP: limiting log bytes Sep 21 11:02:21.357: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2988 --limit-bytes=1' Sep 21 11:02:22.754: INFO: stderr: "" Sep 21 11:02:22.755: INFO: stdout: "I" Sep 21 11:02:22.755: INFO: got output "I" STEP: exposing timestamps Sep 21 11:02:22.756: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2988 --tail=1 --timestamps' Sep 21 11:02:24.016: INFO: stderr: "" Sep 21 11:02:24.016: INFO: stdout: "2020-09-21T11:02:23.995257447Z I0921 11:02:23.995114 1 logs_generator.go:76] 34 PUT /api/v1/namespaces/kube-system/pods/bpv 408\n" Sep 21 11:02:24.017: INFO: got output "2020-09-21T11:02:23.995257447Z I0921 11:02:23.995114 1 logs_generator.go:76] 34 PUT /api/v1/namespaces/kube-system/pods/bpv 408\n" STEP: restricting to a time range Sep 21 11:02:26.520: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2988 --since=1s' Sep 21 11:02:27.793: INFO: stderr: "" Sep 21 11:02:27.793: INFO: stdout: "I0921 11:02:26.795097 1 logs_generator.go:76] 48 GET /api/v1/namespaces/default/pods/hs8c 242\nI0921 11:02:26.995157 1 logs_generator.go:76] 49 GET /api/v1/namespaces/default/pods/f75f 489\nI0921 11:02:27.195061 1 logs_generator.go:76] 50 PUT /api/v1/namespaces/default/pods/xlq 382\nI0921 11:02:27.395164 1 logs_generator.go:76] 51 PUT /api/v1/namespaces/ns/pods/c4fc 374\nI0921 11:02:27.595151 1 logs_generator.go:76] 52 POST /api/v1/namespaces/default/pods/8266 
350\n" Sep 21 11:02:27.794: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2988 --since=24h' Sep 21 11:02:29.141: INFO: stderr: "" Sep 21 11:02:29.141: INFO: stdout: "I0921 11:02:17.194977 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/5sp 200\nI0921 11:02:17.395187 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/pr2 398\nI0921 11:02:17.595225 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/zkg4 551\nI0921 11:02:17.795189 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/7lh 551\nI0921 11:02:17.995131 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/bh69 405\nI0921 11:02:18.195175 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/dx9d 218\nI0921 11:02:18.395148 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/pck 500\nI0921 11:02:18.595147 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/5t8z 432\nI0921 11:02:18.795182 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/2rtv 319\nI0921 11:02:18.995131 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/w26w 326\nI0921 11:02:19.195138 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/5hrj 467\nI0921 11:02:19.395118 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/j84 468\nI0921 11:02:19.595142 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/lfpj 596\nI0921 11:02:19.795159 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/xrq 532\nI0921 11:02:19.995113 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/6f6 209\nI0921 11:02:20.195176 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/zfpn 422\nI0921 11:02:20.395138 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/default/pods/c2h8 273\nI0921 11:02:20.595175 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/rfws 519\nI0921 
11:02:20.795158 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/nhtz 583\nI0921 11:02:20.995137 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/h225 557\nI0921 11:02:21.195123 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/g26 306\nI0921 11:02:21.395130 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/2xhg 254\nI0921 11:02:21.595113 1 logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/8m8 394\nI0921 11:02:21.795104 1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/4qkf 578\nI0921 11:02:21.995089 1 logs_generator.go:76] 24 POST /api/v1/namespaces/default/pods/c4d 445\nI0921 11:02:22.195074 1 logs_generator.go:76] 25 POST /api/v1/namespaces/ns/pods/jdp 407\nI0921 11:02:22.395159 1 logs_generator.go:76] 26 GET /api/v1/namespaces/ns/pods/bt2m 537\nI0921 11:02:22.595127 1 logs_generator.go:76] 27 PUT /api/v1/namespaces/ns/pods/s24 310\nI0921 11:02:22.795085 1 logs_generator.go:76] 28 GET /api/v1/namespaces/ns/pods/sfq 239\nI0921 11:02:22.995116 1 logs_generator.go:76] 29 GET /api/v1/namespaces/ns/pods/qlg4 271\nI0921 11:02:23.195099 1 logs_generator.go:76] 30 PUT /api/v1/namespaces/kube-system/pods/r6ph 486\nI0921 11:02:23.395167 1 logs_generator.go:76] 31 POST /api/v1/namespaces/ns/pods/kdvw 384\nI0921 11:02:23.595177 1 logs_generator.go:76] 32 GET /api/v1/namespaces/ns/pods/7js 439\nI0921 11:02:23.795140 1 logs_generator.go:76] 33 POST /api/v1/namespaces/default/pods/b2jq 555\nI0921 11:02:23.995114 1 logs_generator.go:76] 34 PUT /api/v1/namespaces/kube-system/pods/bpv 408\nI0921 11:02:24.195131 1 logs_generator.go:76] 35 GET /api/v1/namespaces/ns/pods/5k7f 386\nI0921 11:02:24.395157 1 logs_generator.go:76] 36 PUT /api/v1/namespaces/kube-system/pods/bp5 489\nI0921 11:02:24.595153 1 logs_generator.go:76] 37 GET /api/v1/namespaces/kube-system/pods/pxnl 241\nI0921 11:02:24.795126 1 logs_generator.go:76] 38 PUT /api/v1/namespaces/default/pods/twzm 549\nI0921 11:02:24.995158 1 
logs_generator.go:76] 39 PUT /api/v1/namespaces/ns/pods/cm4 597\nI0921 11:02:25.195116 1 logs_generator.go:76] 40 POST /api/v1/namespaces/default/pods/5r8l 478\nI0921 11:02:25.395193 1 logs_generator.go:76] 41 GET /api/v1/namespaces/default/pods/jf9 307\nI0921 11:02:25.595134 1 logs_generator.go:76] 42 POST /api/v1/namespaces/kube-system/pods/gvpr 232\nI0921 11:02:25.795179 1 logs_generator.go:76] 43 POST /api/v1/namespaces/default/pods/bp4 465\nI0921 11:02:25.995156 1 logs_generator.go:76] 44 POST /api/v1/namespaces/ns/pods/sd5d 315\nI0921 11:02:26.195172 1 logs_generator.go:76] 45 GET /api/v1/namespaces/ns/pods/lhh 294\nI0921 11:02:26.395165 1 logs_generator.go:76] 46 GET /api/v1/namespaces/kube-system/pods/twzf 343\nI0921 11:02:26.595136 1 logs_generator.go:76] 47 POST /api/v1/namespaces/default/pods/fjgs 303\nI0921 11:02:26.795097 1 logs_generator.go:76] 48 GET /api/v1/namespaces/default/pods/hs8c 242\nI0921 11:02:26.995157 1 logs_generator.go:76] 49 GET /api/v1/namespaces/default/pods/f75f 489\nI0921 11:02:27.195061 1 logs_generator.go:76] 50 PUT /api/v1/namespaces/default/pods/xlq 382\nI0921 11:02:27.395164 1 logs_generator.go:76] 51 PUT /api/v1/namespaces/ns/pods/c4fc 374\nI0921 11:02:27.595151 1 logs_generator.go:76] 52 POST /api/v1/namespaces/default/pods/8266 350\nI0921 11:02:27.795168 1 logs_generator.go:76] 53 PUT /api/v1/namespaces/kube-system/pods/xn8 582\nI0921 11:02:27.995119 1 logs_generator.go:76] 54 PUT /api/v1/namespaces/default/pods/gm8 230\nI0921 11:02:28.195139 1 logs_generator.go:76] 55 GET /api/v1/namespaces/kube-system/pods/5hw2 451\nI0921 11:02:28.395129 1 logs_generator.go:76] 56 PUT /api/v1/namespaces/default/pods/ct9v 458\nI0921 11:02:28.595139 1 logs_generator.go:76] 57 POST /api/v1/namespaces/default/pods/drz 454\nI0921 11:02:28.795146 1 logs_generator.go:76] 58 GET /api/v1/namespaces/default/pods/ptz 475\nI0921 11:02:28.995141 1 logs_generator.go:76] 59 POST /api/v1/namespaces/kube-system/pods/hsvq 231\n" [AfterEach] Kubectl logs 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1421 Sep 21 11:02:29.144: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-2988' Sep 21 11:02:43.198: INFO: stderr: "" Sep 21 11:02:43.198: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:02:43.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2988" for this suite. • [SLOW TEST:29.889 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1411 should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":303,"completed":142,"skipped":2394,"failed":0} SSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: 
Creating a kubernetes client Sep 21 11:02:43.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 11:02:43.283: INFO: Creating deployment "test-recreate-deployment" Sep 21 11:02:43.293: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Sep 21 11:02:43.397: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Sep 21 11:02:45.410: INFO: Waiting deployment "test-recreate-deployment" to complete Sep 21 11:02:45.414: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736282963, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736282963, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736282963, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736282963, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 21 11:02:47.423: INFO: 
Triggering a new rollout for deployment "test-recreate-deployment" Sep 21 11:02:47.437: INFO: Updating deployment test-recreate-deployment Sep 21 11:02:47.437: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 21 11:02:48.045: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-8112 /apis/apps/v1/namespaces/deployment-8112/deployments/test-recreate-deployment 03d17765-b501-4c28-ac10-b4471bc45a6e 2061787 2 2020-09-21 11:02:43 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-09-21 11:02:47 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-21 11:02:47 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x8b9f268 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-09-21 11:02:47 +0000 UTC,LastTransitionTime:2020-09-21 11:02:47 +0000 
UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2020-09-21 11:02:47 +0000 UTC,LastTransitionTime:2020-09-21 11:02:43 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Sep 21 11:02:48.085: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-8112 /apis/apps/v1/namespaces/deployment-8112/replicasets/test-recreate-deployment-f79dd4667 6e9a5b6b-8427-46e9-a997-2ab0e99cea30 2061786 1 2020-09-21 11:02:47 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 03d17765-b501-4c28-ac10-b4471bc45a6e 0x744d190 0x744d191}] [] [{kube-controller-manager Update apps/v1 2020-09-21 11:02:47 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"03d17765-b501-4c28-ac10-b4471bc45a6e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x744d208 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler 
[] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 21 11:02:48.086: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Sep 21 11:02:48.086: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-c96cf48f deployment-8112 /apis/apps/v1/namespaces/deployment-8112/replicasets/test-recreate-deployment-c96cf48f aedbf083-1826-4681-b032-0c06ade55cf8 2061775 2 2020-09-21 11:02:43 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 03d17765-b501-4c28-ac10-b4471bc45a6e 0x744d09f 0x744d0b0}] [] [{kube-controller-manager Update apps/v1 2020-09-21 11:02:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"03d17765-b501-4c28-ac10-b4471bc45a6e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[str
ing]string{name: sample-pod-3,pod-template-hash: c96cf48f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x744d128 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 21 11:02:48.099: INFO: Pod "test-recreate-deployment-f79dd4667-mfl5m" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-mfl5m test-recreate-deployment-f79dd4667- deployment-8112 /api/v1/namespaces/deployment-8112/pods/test-recreate-deployment-f79dd4667-mfl5m 6f5ecc65-120a-4b2e-bf57-d1d73208055b 2061789 0 2020-09-21 11:02:47 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 6e9a5b6b-8427-46e9-a997-2ab0e99cea30 0x8b9f640 0x8b9f641}] [] [{kube-controller-manager Update v1 2020-09-21 11:02:47 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e9a5b6b-8427-46e9-a997-2ab0e99cea30\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 11:02:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gj44g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gj44g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gj44g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Ph
ase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 11:02:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 11:02:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 11:02:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 11:02:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-09-21 11:02:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:02:48.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8112" for this suite. 
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":143,"skipped":2397,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:02:48.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:02:54.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7892" for this suite. 
• [SLOW TEST:6.612 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":303,"completed":144,"skipped":2434,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:02:54.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0921 11:03:06.419203 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Sep 21 11:04:08.446: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
Sep 21 11:04:08.447: INFO: Deleting pod "simpletest-rc-to-be-deleted-4dr7b" in namespace "gc-1231"
Sep 21 11:04:08.489: INFO: Deleting pod "simpletest-rc-to-be-deleted-6vfq4" in namespace "gc-1231"
Sep 21 11:04:08.563: INFO: Deleting pod "simpletest-rc-to-be-deleted-89fgv" in namespace "gc-1231"
Sep 21 11:04:08.606: INFO: Deleting pod "simpletest-rc-to-be-deleted-99r5c" in namespace "gc-1231"
Sep 21 11:04:08.828: INFO: Deleting pod "simpletest-rc-to-be-deleted-9zgt2" in namespace "gc-1231"
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:04:08.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1231" for this suite.
• [SLOW TEST:74.283 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":303,"completed":145,"skipped":2447,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:04:09.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 21 11:04:28.424: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 21 11:04:30.681: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283068, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283068, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283068, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283068, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 21 11:04:33.736: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Sep 21 11:04:37.796: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config attach --namespace=webhook-4159 to-be-attached-pod -i -c=container1'
Sep 21 11:04:39.146: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:04:39.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4159" for this suite.
STEP: Destroying namespace "webhook-4159-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:30.230 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":303,"completed":146,"skipped":2464,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:04:39.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:04:56.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4029" for this suite.
• [SLOW TEST:17.216 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":303,"completed":147,"skipped":2470,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] PodTemplates
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:04:56.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run the lifecycle of PodTemplates [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-node] PodTemplates
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:04:56.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-5218" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":303,"completed":148,"skipped":2508,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:04:56.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 21 11:05:03.222: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 21 11:05:05.340: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283103, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283103, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283103, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283103, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 21 11:05:08.427: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:05:18.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1019" for this suite.
STEP: Destroying namespace "webhook-1019-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:22.092 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":303,"completed":149,"skipped":2528,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:05:18.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 21 11:05:30.842: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 21 11:05:32.863: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283130, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283130, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283130, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283130, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 21 11:05:35.909: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 21 11:05:35.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6238-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:05:37.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1254" for this suite.
STEP: Destroying namespace "webhook-1254-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:18.423 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":303,"completed":150,"skipped":2549,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:05:37.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-projected-pvm9
STEP: Creating a pod to test atomic-volume-subpath
Sep 21 11:05:37.258: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-pvm9" in namespace "subpath-434" to be "Succeeded or Failed"
Sep 21 11:05:37.263: INFO: Pod "pod-subpath-test-projected-pvm9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.861561ms
Sep 21 11:05:39.271: INFO: Pod "pod-subpath-test-projected-pvm9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012504032s
Sep 21 11:05:41.280: INFO: Pod "pod-subpath-test-projected-pvm9": Phase="Running", Reason="", readiness=true. Elapsed: 4.021592496s
Sep 21 11:05:43.289: INFO: Pod "pod-subpath-test-projected-pvm9": Phase="Running", Reason="", readiness=true. Elapsed: 6.030865696s
Sep 21 11:05:45.296: INFO: Pod "pod-subpath-test-projected-pvm9": Phase="Running", Reason="", readiness=true. Elapsed: 8.038215662s
Sep 21 11:05:47.305: INFO: Pod "pod-subpath-test-projected-pvm9": Phase="Running", Reason="", readiness=true. Elapsed: 10.04627305s
Sep 21 11:05:49.311: INFO: Pod "pod-subpath-test-projected-pvm9": Phase="Running", Reason="", readiness=true. Elapsed: 12.053150707s
Sep 21 11:05:51.320: INFO: Pod "pod-subpath-test-projected-pvm9": Phase="Running", Reason="", readiness=true. Elapsed: 14.061804501s
Sep 21 11:05:53.328: INFO: Pod "pod-subpath-test-projected-pvm9": Phase="Running", Reason="", readiness=true. Elapsed: 16.069657532s
Sep 21 11:05:55.336: INFO: Pod "pod-subpath-test-projected-pvm9": Phase="Running", Reason="", readiness=true. Elapsed: 18.077709568s
Sep 21 11:05:57.345: INFO: Pod "pod-subpath-test-projected-pvm9": Phase="Running", Reason="", readiness=true. Elapsed: 20.086608469s
Sep 21 11:05:59.353: INFO: Pod "pod-subpath-test-projected-pvm9": Phase="Running", Reason="", readiness=true. Elapsed: 22.094812615s
Sep 21 11:06:01.360: INFO: Pod "pod-subpath-test-projected-pvm9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.102223963s
STEP: Saw pod success
Sep 21 11:06:01.361: INFO: Pod "pod-subpath-test-projected-pvm9" satisfied condition "Succeeded or Failed"
Sep 21 11:06:01.366: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-projected-pvm9 container test-container-subpath-projected-pvm9:
STEP: delete the pod
Sep 21 11:06:01.401: INFO: Waiting for pod pod-subpath-test-projected-pvm9 to disappear
Sep 21 11:06:01.405: INFO: Pod pod-subpath-test-projected-pvm9 no longer exists
STEP: Deleting pod pod-subpath-test-projected-pvm9
Sep 21 11:06:01.405: INFO: Deleting pod "pod-subpath-test-projected-pvm9" in namespace "subpath-434"
[AfterEach] [sig-storage] Subpath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:06:01.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-434" for this suite.
• [SLOW TEST:24.241 seconds]
[sig-storage] Subpath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":303,"completed":151,"skipped":2559,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:06:01.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:06:12.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5977" for this suite.
• [SLOW TEST:11.214 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller.
[Conformance]","total":303,"completed":152,"skipped":2579,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:06:12.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 11:06:12.749: INFO: The status of Pod test-webserver-274c0408-b7b0-4cea-9d43-997ee29f29d4 is Pending, waiting for it to be Running (with Ready = true) Sep 21 11:06:14.757: INFO: The status of Pod test-webserver-274c0408-b7b0-4cea-9d43-997ee29f29d4 is Pending, waiting for it to be Running (with Ready = true) Sep 21 11:06:16.773: INFO: The status of Pod test-webserver-274c0408-b7b0-4cea-9d43-997ee29f29d4 is Running (Ready = false) Sep 21 11:06:18.761: INFO: The status of Pod test-webserver-274c0408-b7b0-4cea-9d43-997ee29f29d4 is Running (Ready = false) Sep 21 11:06:20.757: INFO: The status of Pod test-webserver-274c0408-b7b0-4cea-9d43-997ee29f29d4 is Running (Ready = false) Sep 21 11:06:22.758: 
INFO: The status of Pod test-webserver-274c0408-b7b0-4cea-9d43-997ee29f29d4 is Running (Ready = false) Sep 21 11:06:24.758: INFO: The status of Pod test-webserver-274c0408-b7b0-4cea-9d43-997ee29f29d4 is Running (Ready = false) Sep 21 11:06:26.758: INFO: The status of Pod test-webserver-274c0408-b7b0-4cea-9d43-997ee29f29d4 is Running (Ready = false) Sep 21 11:06:28.758: INFO: The status of Pod test-webserver-274c0408-b7b0-4cea-9d43-997ee29f29d4 is Running (Ready = false) Sep 21 11:06:30.756: INFO: The status of Pod test-webserver-274c0408-b7b0-4cea-9d43-997ee29f29d4 is Running (Ready = false) Sep 21 11:06:32.757: INFO: The status of Pod test-webserver-274c0408-b7b0-4cea-9d43-997ee29f29d4 is Running (Ready = false) Sep 21 11:06:34.758: INFO: The status of Pod test-webserver-274c0408-b7b0-4cea-9d43-997ee29f29d4 is Running (Ready = true) Sep 21 11:06:34.765: INFO: Container started at 2020-09-21 11:06:15 +0000 UTC, pod became ready at 2020-09-21 11:06:33 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:06:34.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4482" for this suite. 
• [SLOW TEST:22.137 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":303,"completed":153,"skipped":2592,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:06:34.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-dp89 STEP: Creating a pod to test atomic-volume-subpath Sep 21 11:06:34.908: INFO: Waiting up to 5m0s 
for pod "pod-subpath-test-secret-dp89" in namespace "subpath-1596" to be "Succeeded or Failed" Sep 21 11:06:34.916: INFO: Pod "pod-subpath-test-secret-dp89": Phase="Pending", Reason="", readiness=false. Elapsed: 8.232861ms Sep 21 11:06:36.924: INFO: Pod "pod-subpath-test-secret-dp89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015388383s Sep 21 11:06:38.932: INFO: Pod "pod-subpath-test-secret-dp89": Phase="Running", Reason="", readiness=true. Elapsed: 4.023599592s Sep 21 11:06:40.938: INFO: Pod "pod-subpath-test-secret-dp89": Phase="Running", Reason="", readiness=true. Elapsed: 6.030128839s Sep 21 11:06:42.947: INFO: Pod "pod-subpath-test-secret-dp89": Phase="Running", Reason="", readiness=true. Elapsed: 8.038399682s Sep 21 11:06:44.956: INFO: Pod "pod-subpath-test-secret-dp89": Phase="Running", Reason="", readiness=true. Elapsed: 10.047647544s Sep 21 11:06:46.964: INFO: Pod "pod-subpath-test-secret-dp89": Phase="Running", Reason="", readiness=true. Elapsed: 12.055323133s Sep 21 11:06:48.973: INFO: Pod "pod-subpath-test-secret-dp89": Phase="Running", Reason="", readiness=true. Elapsed: 14.064256238s Sep 21 11:06:50.980: INFO: Pod "pod-subpath-test-secret-dp89": Phase="Running", Reason="", readiness=true. Elapsed: 16.072049919s Sep 21 11:06:52.989: INFO: Pod "pod-subpath-test-secret-dp89": Phase="Running", Reason="", readiness=true. Elapsed: 18.080574688s Sep 21 11:06:54.997: INFO: Pod "pod-subpath-test-secret-dp89": Phase="Running", Reason="", readiness=true. Elapsed: 20.089197814s Sep 21 11:06:57.005: INFO: Pod "pod-subpath-test-secret-dp89": Phase="Running", Reason="", readiness=true. Elapsed: 22.096729233s Sep 21 11:06:59.013: INFO: Pod "pod-subpath-test-secret-dp89": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.104729495s STEP: Saw pod success Sep 21 11:06:59.013: INFO: Pod "pod-subpath-test-secret-dp89" satisfied condition "Succeeded or Failed" Sep 21 11:06:59.019: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-secret-dp89 container test-container-subpath-secret-dp89: STEP: delete the pod Sep 21 11:06:59.141: INFO: Waiting for pod pod-subpath-test-secret-dp89 to disappear Sep 21 11:06:59.148: INFO: Pod pod-subpath-test-secret-dp89 no longer exists STEP: Deleting pod pod-subpath-test-secret-dp89 Sep 21 11:06:59.149: INFO: Deleting pod "pod-subpath-test-secret-dp89" in namespace "subpath-1596" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:06:59.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1596" for this suite. • [SLOW TEST:24.381 seconds] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":303,"completed":154,"skipped":2609,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:06:59.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0921 11:07:00.072696 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Sep 21 11:08:02.096: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:08:02.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6522" for this suite. 
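[Editor's note] The garbage-collector behavior exercised above hinges on ownerReferences and the delete propagation policy. A minimal sketch of a Deployment like the one the test creates (the name and labels are illustrative, not taken from the log):

```yaml
# Hedged sketch: deleting this Deployment without orphaning lets the garbage
# collector remove the owned ReplicaSet and Pods via their ownerReferences.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gc-test-deployment    # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gc-test
  template:
    metadata:
      labels:
        app: gc-test
    spec:
      containers:
      - name: nginx
        image: nginx
```

Deleting it with plain `kubectl delete deployment gc-test-deployment` cascades to the ReplicaSet, which is the "not orphaning" case the test verifies; passing `--cascade=false` (the v1.19-era flag; `--cascade=orphan` in newer kubectl) would instead leave the ReplicaSet behind.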
• [SLOW TEST:62.944 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":303,"completed":155,"skipped":2640,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:08:02.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 21 11:08:19.506: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 21 11:08:21.664: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283299, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283299, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283299, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283299, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 21 11:08:24.717: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:08:36.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-294" for this suite. STEP: Destroying namespace "webhook-294-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:34.948 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":303,"completed":156,"skipped":2640,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:08:37.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 11:08:49.379: INFO: Checking APIGroup: apiregistration.k8s.io Sep 21 11:08:49.382: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Sep 21 11:08:49.382: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Sep 21 11:08:49.382: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Sep 21 11:08:49.382: INFO: Checking APIGroup: extensions Sep 21 11:08:49.384: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Sep 21 11:08:49.384: INFO: Versions found [{extensions/v1beta1 v1beta1}] Sep 21 11:08:49.384: INFO: extensions/v1beta1 matches extensions/v1beta1 Sep 21 11:08:49.385: INFO: Checking APIGroup: apps Sep 21 11:08:49.386: INFO: PreferredVersion.GroupVersion: apps/v1 Sep 21 11:08:49.386: INFO: Versions found [{apps/v1 v1}] Sep 21 11:08:49.386: INFO: apps/v1 matches apps/v1 Sep 21 11:08:49.386: INFO: Checking APIGroup: events.k8s.io Sep 21 11:08:49.389: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Sep 21 11:08:49.389: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Sep 21 11:08:49.389: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Sep 21 11:08:49.389: INFO: Checking APIGroup: authentication.k8s.io Sep 21 11:08:49.391: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Sep 21 11:08:49.391: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Sep 21 11:08:49.391: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Sep 21 11:08:49.392: INFO: Checking APIGroup: authorization.k8s.io Sep 21 
11:08:49.394: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Sep 21 11:08:49.394: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Sep 21 11:08:49.394: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Sep 21 11:08:49.394: INFO: Checking APIGroup: autoscaling Sep 21 11:08:49.396: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Sep 21 11:08:49.396: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Sep 21 11:08:49.396: INFO: autoscaling/v1 matches autoscaling/v1 Sep 21 11:08:49.396: INFO: Checking APIGroup: batch Sep 21 11:08:49.398: INFO: PreferredVersion.GroupVersion: batch/v1 Sep 21 11:08:49.398: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Sep 21 11:08:49.398: INFO: batch/v1 matches batch/v1 Sep 21 11:08:49.399: INFO: Checking APIGroup: certificates.k8s.io Sep 21 11:08:49.401: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Sep 21 11:08:49.401: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Sep 21 11:08:49.401: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Sep 21 11:08:49.401: INFO: Checking APIGroup: networking.k8s.io Sep 21 11:08:49.403: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Sep 21 11:08:49.403: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Sep 21 11:08:49.403: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Sep 21 11:08:49.403: INFO: Checking APIGroup: policy Sep 21 11:08:49.405: INFO: PreferredVersion.GroupVersion: policy/v1beta1 Sep 21 11:08:49.405: INFO: Versions found [{policy/v1beta1 v1beta1}] Sep 21 11:08:49.405: INFO: policy/v1beta1 matches policy/v1beta1 Sep 21 11:08:49.405: INFO: Checking APIGroup: rbac.authorization.k8s.io Sep 21 11:08:49.407: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Sep 21 11:08:49.407: INFO: Versions found [{rbac.authorization.k8s.io/v1 
v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Sep 21 11:08:49.407: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Sep 21 11:08:49.407: INFO: Checking APIGroup: storage.k8s.io Sep 21 11:08:49.409: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Sep 21 11:08:49.409: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Sep 21 11:08:49.409: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Sep 21 11:08:49.410: INFO: Checking APIGroup: admissionregistration.k8s.io Sep 21 11:08:49.412: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Sep 21 11:08:49.412: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Sep 21 11:08:49.412: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Sep 21 11:08:49.412: INFO: Checking APIGroup: apiextensions.k8s.io Sep 21 11:08:49.414: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Sep 21 11:08:49.414: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Sep 21 11:08:49.414: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Sep 21 11:08:49.414: INFO: Checking APIGroup: scheduling.k8s.io Sep 21 11:08:49.416: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Sep 21 11:08:49.416: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Sep 21 11:08:49.416: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Sep 21 11:08:49.416: INFO: Checking APIGroup: coordination.k8s.io Sep 21 11:08:49.419: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Sep 21 11:08:49.419: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Sep 21 11:08:49.419: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Sep 21 11:08:49.419: INFO: Checking APIGroup: node.k8s.io Sep 21 11:08:49.421: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1beta1 Sep 21 
11:08:49.421: INFO: Versions found [{node.k8s.io/v1beta1 v1beta1}] Sep 21 11:08:49.421: INFO: node.k8s.io/v1beta1 matches node.k8s.io/v1beta1 Sep 21 11:08:49.421: INFO: Checking APIGroup: discovery.k8s.io Sep 21 11:08:49.423: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1 Sep 21 11:08:49.424: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}] Sep 21 11:08:49.424: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:08:49.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-5098" for this suite. • [SLOW TEST:12.376 seconds] [sig-api-machinery] Discovery /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":303,"completed":157,"skipped":2649,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:08:49.441: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 11:10:49.566: INFO: Deleting pod "var-expansion-a26034a9-8b88-49ef-9831-37f194a548a0" in namespace "var-expansion-8135" Sep 21 11:10:49.573: INFO: Wait up to 5m0s for pod "var-expansion-a26034a9-8b88-49ef-9831-37f194a548a0" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:10:53.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8135" for this suite. • [SLOW TEST:124.157 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":303,"completed":158,"skipped":2651,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] 
[sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:10:53.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server Sep 21 11:10:53.669: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:10:54.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7047" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":303,"completed":159,"skipped":2663,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:10:54.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 21 11:10:54.910: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c01f8e8f-7c79-4697-8232-d21efd0b1ce8" in namespace "projected-2711" to be "Succeeded or Failed" Sep 21 11:10:54.914: INFO: Pod "downwardapi-volume-c01f8e8f-7c79-4697-8232-d21efd0b1ce8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.956077ms Sep 21 11:10:56.923: INFO: Pod "downwardapi-volume-c01f8e8f-7c79-4697-8232-d21efd0b1ce8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012482556s Sep 21 11:10:58.956: INFO: Pod "downwardapi-volume-c01f8e8f-7c79-4697-8232-d21efd0b1ce8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045461393s STEP: Saw pod success Sep 21 11:10:58.956: INFO: Pod "downwardapi-volume-c01f8e8f-7c79-4697-8232-d21efd0b1ce8" satisfied condition "Succeeded or Failed" Sep 21 11:10:58.962: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-c01f8e8f-7c79-4697-8232-d21efd0b1ce8 container client-container: STEP: delete the pod Sep 21 11:10:59.018: INFO: Waiting for pod downwardapi-volume-c01f8e8f-7c79-4697-8232-d21efd0b1ce8 to disappear Sep 21 11:10:59.031: INFO: Pod downwardapi-volume-c01f8e8f-7c79-4697-8232-d21efd0b1ce8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:10:59.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2711" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":160,"skipped":2674,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:10:59.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Sep 21 11:11:03.272: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:11:03.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"container-runtime-9805" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":303,"completed":161,"skipped":2687,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:11:03.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-2370 STEP: creating service affinity-clusterip in namespace services-2370 STEP: creating replication controller affinity-clusterip in namespace services-2370 I0921 11:11:03.739238 10 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-2370, replica count: 3 I0921 11:11:06.790687 10 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0921 11:11:09.791555 10 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 21 11:11:09.802: INFO: Creating new exec pod Sep 21 11:11:14.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-2370 execpod-affinity8w2wz -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Sep 21 11:11:19.335: INFO: stderr: "I0921 11:11:19.196579 2696 log.go:181] (0x2e980e0) (0x2e98150) Create stream\nI0921 11:11:19.200310 2696 log.go:181] (0x2e980e0) (0x2e98150) Stream added, broadcasting: 1\nI0921 11:11:19.213595 2696 log.go:181] (0x2e980e0) Reply frame received for 1\nI0921 11:11:19.214573 2696 log.go:181] (0x2e980e0) (0x2e98310) Create stream\nI0921 11:11:19.214749 2696 log.go:181] (0x2e980e0) (0x2e98310) Stream added, broadcasting: 3\nI0921 11:11:19.218851 2696 log.go:181] (0x2e980e0) Reply frame received for 3\nI0921 11:11:19.219367 2696 log.go:181] (0x2e980e0) (0x2bd0070) Create stream\nI0921 11:11:19.219478 2696 log.go:181] (0x2e980e0) (0x2bd0070) Stream added, broadcasting: 5\nI0921 11:11:19.221799 2696 log.go:181] (0x2e980e0) Reply frame received for 5\nI0921 11:11:19.315670 2696 log.go:181] (0x2e980e0) Data frame received for 3\nI0921 11:11:19.315939 2696 log.go:181] (0x2e980e0) Data frame received for 5\nI0921 11:11:19.316209 2696 log.go:181] (0x2e980e0) Data frame received for 1\nI0921 11:11:19.316694 2696 log.go:181] (0x2e98150) (1) Data frame handling\nI0921 11:11:19.316799 2696 log.go:181] (0x2e98310) (3) Data frame handling\nI0921 11:11:19.317009 2696 log.go:181] (0x2bd0070) (5) Data frame handling\nI0921 11:11:19.317814 2696 log.go:181] (0x2bd0070) (5) Data frame sent\nI0921 11:11:19.317963 2696 log.go:181] (0x2e98150) (1) Data frame sent\nI0921 11:11:19.318224 2696 log.go:181] (0x2e980e0) Data frame received for 
5\nI0921 11:11:19.318311 2696 log.go:181] (0x2bd0070) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0921 11:11:19.319634 2696 log.go:181] (0x2e980e0) (0x2e98150) Stream removed, broadcasting: 1\nI0921 11:11:19.321074 2696 log.go:181] (0x2bd0070) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0921 11:11:19.321151 2696 log.go:181] (0x2e980e0) Data frame received for 5\nI0921 11:11:19.321230 2696 log.go:181] (0x2bd0070) (5) Data frame handling\nI0921 11:11:19.322994 2696 log.go:181] (0x2e980e0) Go away received\nI0921 11:11:19.324353 2696 log.go:181] (0x2e980e0) (0x2e98150) Stream removed, broadcasting: 1\nI0921 11:11:19.324710 2696 log.go:181] (0x2e980e0) (0x2e98310) Stream removed, broadcasting: 3\nI0921 11:11:19.324927 2696 log.go:181] (0x2e980e0) (0x2bd0070) Stream removed, broadcasting: 5\n" Sep 21 11:11:19.336: INFO: stdout: "" Sep 21 11:11:19.344: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-2370 execpod-affinity8w2wz -- /bin/sh -x -c nc -zv -t -w 2 10.110.126.225 80' Sep 21 11:11:20.829: INFO: stderr: "I0921 11:11:20.727331 2717 log.go:181] (0x299c620) (0x299ce70) Create stream\nI0921 11:11:20.731291 2717 log.go:181] (0x299c620) (0x299ce70) Stream added, broadcasting: 1\nI0921 11:11:20.749655 2717 log.go:181] (0x299c620) Reply frame received for 1\nI0921 11:11:20.750494 2717 log.go:181] (0x299c620) (0x28045b0) Create stream\nI0921 11:11:20.750659 2717 log.go:181] (0x299c620) (0x28045b0) Stream added, broadcasting: 3\nI0921 11:11:20.754698 2717 log.go:181] (0x299c620) Reply frame received for 3\nI0921 11:11:20.755051 2717 log.go:181] (0x299c620) (0x2d02540) Create stream\nI0921 11:11:20.755131 2717 log.go:181] (0x299c620) (0x2d02540) Stream added, broadcasting: 5\nI0921 11:11:20.756742 2717 log.go:181] (0x299c620) Reply frame received for 5\nI0921 11:11:20.814293 2717 log.go:181] (0x299c620) Data frame received for 
3\nI0921 11:11:20.814674 2717 log.go:181] (0x28045b0) (3) Data frame handling\nI0921 11:11:20.815023 2717 log.go:181] (0x299c620) Data frame received for 5\nI0921 11:11:20.815245 2717 log.go:181] (0x2d02540) (5) Data frame handling\nI0921 11:11:20.816088 2717 log.go:181] (0x299c620) Data frame received for 1\nI0921 11:11:20.816321 2717 log.go:181] (0x299ce70) (1) Data frame handling\nI0921 11:11:20.817148 2717 log.go:181] (0x2d02540) (5) Data frame sent\nI0921 11:11:20.817326 2717 log.go:181] (0x299ce70) (1) Data frame sent\nI0921 11:11:20.817439 2717 log.go:181] (0x299c620) Data frame received for 5\nI0921 11:11:20.817531 2717 log.go:181] (0x2d02540) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.126.225 80\nConnection to 10.110.126.225 80 port [tcp/http] succeeded!\nI0921 11:11:20.818690 2717 log.go:181] (0x299c620) (0x299ce70) Stream removed, broadcasting: 1\nI0921 11:11:20.820576 2717 log.go:181] (0x299c620) Go away received\nI0921 11:11:20.822660 2717 log.go:181] (0x299c620) (0x299ce70) Stream removed, broadcasting: 1\nI0921 11:11:20.822855 2717 log.go:181] (0x299c620) (0x28045b0) Stream removed, broadcasting: 3\nI0921 11:11:20.823013 2717 log.go:181] (0x299c620) (0x2d02540) Stream removed, broadcasting: 5\n" Sep 21 11:11:20.831: INFO: stdout: "" Sep 21 11:11:20.831: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-2370 execpod-affinity8w2wz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.110.126.225:80/ ; done' Sep 21 11:11:22.403: INFO: stderr: "I0921 11:11:22.195120 2737 log.go:181] (0x2bcfce0) (0x2bcfd50) Create stream\nI0921 11:11:22.196904 2737 log.go:181] (0x2bcfce0) (0x2bcfd50) Stream added, broadcasting: 1\nI0921 11:11:22.207487 2737 log.go:181] (0x2bcfce0) Reply frame received for 1\nI0921 11:11:22.208289 2737 log.go:181] (0x2bcfce0) (0x2bcff10) Create stream\nI0921 11:11:22.208413 2737 log.go:181] (0x2bcfce0) (0x2bcff10) 
Stream added, broadcasting: 3\nI0921 11:11:22.209936 2737 log.go:181] (0x2bcfce0) Reply frame received for 3\nI0921 11:11:22.210170 2737 log.go:181] (0x2bcfce0) (0x25ca380) Create stream\nI0921 11:11:22.210239 2737 log.go:181] (0x2bcfce0) (0x25ca380) Stream added, broadcasting: 5\nI0921 11:11:22.211641 2737 log.go:181] (0x2bcfce0) Reply frame received for 5\nI0921 11:11:22.297594 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.298407 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.299232 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.300996 2737 log.go:181] (0x2bcfce0) Data frame received for 5\nI0921 11:11:22.301329 2737 log.go:181] (0x25ca380) (5) Data frame handling\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.126.225:80/\nI0921 11:11:22.301718 2737 log.go:181] (0x25ca380) (5) Data frame sent\nI0921 11:11:22.307890 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.308029 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.308118 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.308283 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.308385 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.308638 2737 log.go:181] (0x2bcfce0) Data frame received for 5\nI0921 11:11:22.308780 2737 log.go:181] (0x25ca380) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.126.225:80/\nI0921 11:11:22.308943 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.309070 2737 log.go:181] (0x25ca380) (5) Data frame sent\nI0921 11:11:22.311139 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.311212 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.311287 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.311707 2737 log.go:181] (0x2bcfce0) Data frame received for 5\nI0921 11:11:22.311781 2737 log.go:181] (0x25ca380) (5) 
Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.126.225:80/I0921 11:11:22.311874 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.311955 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.312016 2737 log.go:181] (0x25ca380) (5) Data frame sent\nI0921 11:11:22.312114 2737 log.go:181] (0x2bcfce0) Data frame received for 5\nI0921 11:11:22.312264 2737 log.go:181] (0x25ca380) (5) Data frame handling\nI0921 11:11:22.312358 2737 log.go:181] (0x25ca380) (5) Data frame sent\n\nI0921 11:11:22.312418 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.316380 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.316445 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.316526 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.316964 2737 log.go:181] (0x2bcfce0) Data frame received for 5\nI0921 11:11:22.317031 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.317143 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.317224 2737 log.go:181] (0x25ca380) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.126.225:80/\nI0921 11:11:22.317309 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.317382 2737 log.go:181] (0x25ca380) (5) Data frame sent\nI0921 11:11:22.320768 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.320889 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.320991 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.321188 2737 log.go:181] (0x2bcfce0) Data frame received for 5\nI0921 11:11:22.321278 2737 log.go:181] (0x25ca380) (5) Data frame handling\nI0921 11:11:22.321351 2737 log.go:181] (0x25ca380) (5) Data frame sent\nI0921 11:11:22.321418 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.321482 2737 log.go:181] (0x2bcff10) (3) Data frame handling\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.110.126.225:80/\nI0921 11:11:22.321557 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.324739 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.324833 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.324919 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.325197 2737 log.go:181] (0x2bcfce0) Data frame received for 5\nI0921 11:11:22.325308 2737 log.go:181] (0x25ca380) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.126.225:80/\nI0921 11:11:22.325418 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.325536 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.325616 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.325685 2737 log.go:181] (0x25ca380) (5) Data frame sent\nI0921 11:11:22.328081 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.328260 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.328352 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.329028 2737 log.go:181] (0x2bcfce0) Data frame received for 5\nI0921 11:11:22.329136 2737 log.go:181] (0x25ca380) (5) Data frame handling\nI0921 11:11:22.329219 2737 log.go:181] (0x25ca380) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.126.225:80/\nI0921 11:11:22.329293 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.329351 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.329423 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.332916 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.333042 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.333163 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.333313 2737 log.go:181] (0x2bcfce0) Data frame received for 5\nI0921 11:11:22.333383 2737 log.go:181] (0x25ca380) (5) Data frame handling\nI0921 
11:11:22.333446 2737 log.go:181] (0x25ca380) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.126.225:80/\nI0921 11:11:22.333517 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.333572 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.333638 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.337103 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.337245 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.337399 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.337750 2737 log.go:181] (0x2bcfce0) Data frame received for 5\nI0921 11:11:22.337874 2737 log.go:181] (0x25ca380) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.126.225:80/\nI0921 11:11:22.337955 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.338278 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.338371 2737 log.go:181] (0x25ca380) (5) Data frame sent\nI0921 11:11:22.338460 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.341221 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.341320 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.341428 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.341986 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.342103 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.342225 2737 log.go:181] (0x2bcfce0) Data frame received for 5\nI0921 11:11:22.342334 2737 log.go:181] (0x25ca380) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.126.225:80/\nI0921 11:11:22.342420 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.342535 2737 log.go:181] (0x25ca380) (5) Data frame sent\nI0921 11:11:22.346517 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.346627 2737 log.go:181] (0x2bcff10) (3) Data frame 
handling\nI0921 11:11:22.346760 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.347279 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.347386 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.347483 2737 log.go:181] (0x2bcfce0) Data frame received for 5\nI0921 11:11:22.347629 2737 log.go:181] (0x25ca380) (5) Data frame handling\n+ echo\n+ curl -q -sI0921 11:11:22.347768 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.347871 2737 log.go:181] (0x25ca380) (5) Data frame sent\nI0921 11:11:22.347970 2737 log.go:181] (0x2bcfce0) Data frame received for 5\nI0921 11:11:22.348053 2737 log.go:181] (0x25ca380) (5) Data frame handling\nI0921 11:11:22.348230 2737 log.go:181] (0x25ca380) (5) Data frame sent\n --connect-timeout 2 http://10.110.126.225:80/\nI0921 11:11:22.351662 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.351774 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.351896 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.352361 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.352471 2737 log.go:181] (0x2bcfce0) Data frame received for 5\nI0921 11:11:22.352578 2737 log.go:181] (0x25ca380) (5) Data frame handling\nI0921 11:11:22.352675 2737 log.go:181] (0x25ca380) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.126.225:80/\nI0921 11:11:22.352763 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.352862 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.359429 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.359530 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.359619 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.360236 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.360323 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.360421 2737 log.go:181] 
(0x2bcfce0) Data frame received for 5\nI0921 11:11:22.360614 2737 log.go:181] (0x25ca380) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.126.225:80/\nI0921 11:11:22.360785 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.360906 2737 log.go:181] (0x25ca380) (5) Data frame sent\nI0921 11:11:22.364106 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.364378 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.364525 2737 log.go:181] (0x2bcfce0) Data frame received for 5\nI0921 11:11:22.364629 2737 log.go:181] (0x25ca380) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.126.225:80/\nI0921 11:11:22.364737 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.364882 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.364965 2737 log.go:181] (0x25ca380) (5) Data frame sent\nI0921 11:11:22.365052 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.365151 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.371148 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.371261 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.371364 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.372323 2737 log.go:181] (0x2bcfce0) Data frame received for 5\nI0921 11:11:22.372504 2737 log.go:181] (0x25ca380) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.126.225:80/\nI0921 11:11:22.372644 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.372809 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.372952 2737 log.go:181] (0x25ca380) (5) Data frame sent\nI0921 11:11:22.373044 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.378522 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.378691 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.378874 2737 
log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.379030 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.379200 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.379317 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.379459 2737 log.go:181] (0x2bcfce0) Data frame received for 5\nI0921 11:11:22.379572 2737 log.go:181] (0x25ca380) (5) Data frame handling\nI0921 11:11:22.379712 2737 log.go:181] (0x25ca380) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.126.225:80/\nI0921 11:11:22.385498 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.385623 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.385739 2737 log.go:181] (0x2bcff10) (3) Data frame sent\nI0921 11:11:22.386449 2737 log.go:181] (0x2bcfce0) Data frame received for 5\nI0921 11:11:22.386576 2737 log.go:181] (0x25ca380) (5) Data frame handling\nI0921 11:11:22.386716 2737 log.go:181] (0x2bcfce0) Data frame received for 3\nI0921 11:11:22.386910 2737 log.go:181] (0x2bcff10) (3) Data frame handling\nI0921 11:11:22.388891 2737 log.go:181] (0x2bcfce0) Data frame received for 1\nI0921 11:11:22.389009 2737 log.go:181] (0x2bcfd50) (1) Data frame handling\nI0921 11:11:22.389121 2737 log.go:181] (0x2bcfd50) (1) Data frame sent\nI0921 11:11:22.390142 2737 log.go:181] (0x2bcfce0) (0x2bcfd50) Stream removed, broadcasting: 1\nI0921 11:11:22.392462 2737 log.go:181] (0x2bcfce0) Go away received\nI0921 11:11:22.395410 2737 log.go:181] (0x2bcfce0) (0x2bcfd50) Stream removed, broadcasting: 1\nI0921 11:11:22.395555 2737 log.go:181] (0x2bcfce0) (0x2bcff10) Stream removed, broadcasting: 3\nI0921 11:11:22.395674 2737 log.go:181] (0x2bcfce0) (0x25ca380) Stream removed, broadcasting: 5\n" Sep 21 11:11:22.410: INFO: stdout: 
"\naffinity-clusterip-sd574\naffinity-clusterip-sd574\naffinity-clusterip-sd574\naffinity-clusterip-sd574\naffinity-clusterip-sd574\naffinity-clusterip-sd574\naffinity-clusterip-sd574\naffinity-clusterip-sd574\naffinity-clusterip-sd574\naffinity-clusterip-sd574\naffinity-clusterip-sd574\naffinity-clusterip-sd574\naffinity-clusterip-sd574\naffinity-clusterip-sd574\naffinity-clusterip-sd574\naffinity-clusterip-sd574" Sep 21 11:11:22.411: INFO: Received response from host: affinity-clusterip-sd574 Sep 21 11:11:22.411: INFO: Received response from host: affinity-clusterip-sd574 Sep 21 11:11:22.411: INFO: Received response from host: affinity-clusterip-sd574 Sep 21 11:11:22.411: INFO: Received response from host: affinity-clusterip-sd574 Sep 21 11:11:22.411: INFO: Received response from host: affinity-clusterip-sd574 Sep 21 11:11:22.411: INFO: Received response from host: affinity-clusterip-sd574 Sep 21 11:11:22.411: INFO: Received response from host: affinity-clusterip-sd574 Sep 21 11:11:22.411: INFO: Received response from host: affinity-clusterip-sd574 Sep 21 11:11:22.411: INFO: Received response from host: affinity-clusterip-sd574 Sep 21 11:11:22.411: INFO: Received response from host: affinity-clusterip-sd574 Sep 21 11:11:22.411: INFO: Received response from host: affinity-clusterip-sd574 Sep 21 11:11:22.411: INFO: Received response from host: affinity-clusterip-sd574 Sep 21 11:11:22.411: INFO: Received response from host: affinity-clusterip-sd574 Sep 21 11:11:22.411: INFO: Received response from host: affinity-clusterip-sd574 Sep 21 11:11:22.411: INFO: Received response from host: affinity-clusterip-sd574 Sep 21 11:11:22.411: INFO: Received response from host: affinity-clusterip-sd574 Sep 21 11:11:22.411: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-2370, will wait for the garbage collector to delete the pods Sep 21 11:11:22.552: INFO: Deleting ReplicationController affinity-clusterip took: 8.316226ms 
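The sixteen identical `Received response from host` lines above are the substance of the affinity assertion: every request through the ClusterIP landed on the same backend pod. A minimal sketch of that check (this is not the e2e framework's Go code, just the logic it implements, with the hostnames taken from the log):

```shell
#!/bin/sh
# Sketch: session affinity holds iff all responses name the same backend pod.
responses="affinity-clusterip-sd574
affinity-clusterip-sd574
affinity-clusterip-sd574"

# Count distinct hostnames among the responses.
unique=$(printf '%s\n' "$responses" | sort -u | wc -l)

if [ "$unique" -eq 1 ]; then
  echo "session affinity held"
else
  echo "session affinity broken: $unique distinct hosts"
fi
```

In the real test the responses come from the `for i in $(seq 0 15); do ... curl ... done` loop shown in the log, run against a Service with `sessionAffinity: ClientIP`.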
Sep 21 11:11:23.053: INFO: Terminating ReplicationController affinity-clusterip pods took: 500.926351ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:11:33.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2370" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:29.698 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":162,"skipped":2751,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:11:33.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Sep 21 11:11:38.004: INFO: Successfully updated pod "annotationupdate1d536f0e-6c05-4706-99f6-cc6817eac1cf" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:11:42.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1621" for this suite. • [SLOW TEST:8.747 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":163,"skipped":2758,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:11:42.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-9f2956e1-b579-49b0-b773-7dc332ee757e STEP: Creating a pod to test consume secrets Sep 21 11:11:42.148: INFO: Waiting up to 5m0s for pod "pod-secrets-3387c279-b9b1-4968-b5c5-0ac092ad59e8" in namespace "secrets-3756" to be "Succeeded or Failed" Sep 21 11:11:42.182: INFO: Pod "pod-secrets-3387c279-b9b1-4968-b5c5-0ac092ad59e8": Phase="Pending", Reason="", readiness=false. Elapsed: 33.848576ms Sep 21 11:11:44.190: INFO: Pod "pod-secrets-3387c279-b9b1-4968-b5c5-0ac092ad59e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042225097s Sep 21 11:11:46.198: INFO: Pod "pod-secrets-3387c279-b9b1-4968-b5c5-0ac092ad59e8": Phase="Running", Reason="", readiness=true. Elapsed: 4.050646598s Sep 21 11:11:48.207: INFO: Pod "pod-secrets-3387c279-b9b1-4968-b5c5-0ac092ad59e8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.058974361s STEP: Saw pod success Sep 21 11:11:48.207: INFO: Pod "pod-secrets-3387c279-b9b1-4968-b5c5-0ac092ad59e8" satisfied condition "Succeeded or Failed" Sep 21 11:11:48.212: INFO: Trying to get logs from node kali-worker pod pod-secrets-3387c279-b9b1-4968-b5c5-0ac092ad59e8 container secret-env-test: STEP: delete the pod Sep 21 11:11:48.322: INFO: Waiting for pod pod-secrets-3387c279-b9b1-4968-b5c5-0ac092ad59e8 to disappear Sep 21 11:11:48.327: INFO: Pod pod-secrets-3387c279-b9b1-4968-b5c5-0ac092ad59e8 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:11:48.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3756" for this suite. • [SLOW TEST:6.291 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":303,"completed":164,"skipped":2775,"failed":0} [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:11:48.345: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-5a359234-203d-459c-be4d-c64964047120 [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:11:48.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4007" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":303,"completed":165,"skipped":2775,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:11:48.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl run pod 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 [It] should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Sep 21 11:11:48.529: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7934' Sep 21 11:11:49.815: INFO: stderr: "" Sep 21 11:11:49.815: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1550 Sep 21 11:11:49.842: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7934' Sep 21 11:12:03.230: INFO: stderr: "" Sep 21 11:12:03.231: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:12:03.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7934" for this suite. 
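For reference, the `kubectl run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine` invocation verified above creates a bare Pod (not a Deployment, because of `--restart=Never`). A sketch of the roughly equivalent manifest — the field layout is the standard Pod spec; the `run:` label is what recent kubectl versions attach and may differ by version:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-httpd-pod
  namespace: kubectl-7934
  labels:
    run: e2e-test-httpd-pod    # label conventionally added by `kubectl run`
spec:
  restartPolicy: Never         # from --restart=Never
  containers:
  - name: e2e-test-httpd-pod
    image: docker.io/library/httpd:2.4.38-alpine
```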
• [SLOW TEST:14.795 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541 should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":303,"completed":166,"skipped":2777,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:12:03.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 
21 11:12:03.330: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b52270c3-fdfd-4d8b-bc6f-6c67b6e7c320" in namespace "downward-api-9152" to be "Succeeded or Failed" Sep 21 11:12:03.357: INFO: Pod "downwardapi-volume-b52270c3-fdfd-4d8b-bc6f-6c67b6e7c320": Phase="Pending", Reason="", readiness=false. Elapsed: 26.550768ms Sep 21 11:12:05.366: INFO: Pod "downwardapi-volume-b52270c3-fdfd-4d8b-bc6f-6c67b6e7c320": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035166565s Sep 21 11:12:07.373: INFO: Pod "downwardapi-volume-b52270c3-fdfd-4d8b-bc6f-6c67b6e7c320": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042674149s STEP: Saw pod success Sep 21 11:12:07.373: INFO: Pod "downwardapi-volume-b52270c3-fdfd-4d8b-bc6f-6c67b6e7c320" satisfied condition "Succeeded or Failed" Sep 21 11:12:07.379: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-b52270c3-fdfd-4d8b-bc6f-6c67b6e7c320 container client-container: STEP: delete the pod Sep 21 11:12:07.440: INFO: Waiting for pod downwardapi-volume-b52270c3-fdfd-4d8b-bc6f-6c67b6e7c320 to disappear Sep 21 11:12:07.448: INFO: Pod downwardapi-volume-b52270c3-fdfd-4d8b-bc6f-6c67b6e7c320 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:12:07.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9152" for this suite. 
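The "should provide podname only" case above mounts a downwardAPI volume that exposes a single file containing the pod's own name. A hedged sketch of such a pod spec (the pod and image names here are illustrative, not the generated `downwardapi-volume-…` name from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29   # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name    # only the pod name is projected
```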
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":303,"completed":167,"skipped":2797,"failed":0} SSSSSSSSSSS ------------------------------ [sig-instrumentation] Events API should delete a collection of events [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:12:07.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Sep 21 11:12:07.781: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:12:07.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2185" for this suite. 
•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":303,"completed":168,"skipped":2808,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:12:07.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command Sep 21 11:12:07.931: INFO: Waiting up to 5m0s for pod "client-containers-3a33218f-0b77-4e79-8bb8-a9f40361c9b9" in namespace "containers-2775" to be "Succeeded or Failed" Sep 21 11:12:07.934: INFO: Pod "client-containers-3a33218f-0b77-4e79-8bb8-a9f40361c9b9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.005996ms Sep 21 11:12:09.955: INFO: Pod "client-containers-3a33218f-0b77-4e79-8bb8-a9f40361c9b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023730638s Sep 21 11:12:11.966: INFO: Pod "client-containers-3a33218f-0b77-4e79-8bb8-a9f40361c9b9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.034814429s STEP: Saw pod success Sep 21 11:12:11.966: INFO: Pod "client-containers-3a33218f-0b77-4e79-8bb8-a9f40361c9b9" satisfied condition "Succeeded or Failed" Sep 21 11:12:11.970: INFO: Trying to get logs from node kali-worker2 pod client-containers-3a33218f-0b77-4e79-8bb8-a9f40361c9b9 container test-container: STEP: delete the pod Sep 21 11:12:12.005: INFO: Waiting for pod client-containers-3a33218f-0b77-4e79-8bb8-a9f40361c9b9 to disappear Sep 21 11:12:12.011: INFO: Pod client-containers-3a33218f-0b77-4e79-8bb8-a9f40361c9b9 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:12:12.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2775" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":303,"completed":169,"skipped":2840,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:12:12.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 
[It] should run and stop complex daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 11:12:12.177: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Sep 21 11:12:12.190: INFO: Number of nodes with available pods: 0 Sep 21 11:12:12.190: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Sep 21 11:12:12.243: INFO: Number of nodes with available pods: 0 Sep 21 11:12:12.243: INFO: Node kali-worker is running more than one daemon pod Sep 21 11:12:13.251: INFO: Number of nodes with available pods: 0 Sep 21 11:12:13.251: INFO: Node kali-worker is running more than one daemon pod Sep 21 11:12:14.268: INFO: Number of nodes with available pods: 0 Sep 21 11:12:14.268: INFO: Node kali-worker is running more than one daemon pod Sep 21 11:12:15.252: INFO: Number of nodes with available pods: 0 Sep 21 11:12:15.252: INFO: Node kali-worker is running more than one daemon pod Sep 21 11:12:16.252: INFO: Number of nodes with available pods: 1 Sep 21 11:12:16.252: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Sep 21 11:12:16.301: INFO: Number of nodes with available pods: 1 Sep 21 11:12:16.301: INFO: Number of running nodes: 0, number of available pods: 1 Sep 21 11:12:17.308: INFO: Number of nodes with available pods: 0 Sep 21 11:12:17.308: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Sep 21 11:12:17.360: INFO: Number of nodes with available pods: 0 Sep 21 11:12:17.360: INFO: Node kali-worker is running more than one daemon pod Sep 21 11:12:18.375: INFO: Number of nodes with available pods: 0 Sep 21 11:12:18.375: 
INFO: Node kali-worker is running more than one daemon pod Sep 21 11:12:19.380: INFO: Number of nodes with available pods: 0 Sep 21 11:12:19.380: INFO: Node kali-worker is running more than one daemon pod Sep 21 11:12:20.369: INFO: Number of nodes with available pods: 0 Sep 21 11:12:20.369: INFO: Node kali-worker is running more than one daemon pod Sep 21 11:12:21.369: INFO: Number of nodes with available pods: 0 Sep 21 11:12:21.369: INFO: Node kali-worker is running more than one daemon pod Sep 21 11:12:22.368: INFO: Number of nodes with available pods: 1 Sep 21 11:12:22.368: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1481, will wait for the garbage collector to delete the pods Sep 21 11:12:22.470: INFO: Deleting DaemonSet.extensions daemon-set took: 17.499594ms Sep 21 11:12:22.871: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.067421ms Sep 21 11:12:33.289: INFO: Number of nodes with available pods: 0 Sep 21 11:12:33.289: INFO: Number of running nodes: 0, number of available pods: 0 Sep 21 11:12:33.294: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1481/daemonsets","resourceVersion":"2064640"},"items":null} Sep 21 11:12:33.299: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1481/pods","resourceVersion":"2064640"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:12:33.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "daemonsets-1481" for this suite. • [SLOW TEST:21.319 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":303,"completed":170,"skipped":2863,"failed":0} SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:12:33.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-26cae135-5cf4-44c2-8c43-d78211232fa4 in namespace container-probe-8669 Sep 21 11:12:37.553: INFO: Started pod 
test-webserver-26cae135-5cf4-44c2-8c43-d78211232fa4 in namespace container-probe-8669 STEP: checking the pod's current state and verifying that restartCount is present Sep 21 11:12:37.559: INFO: Initial restart count of pod test-webserver-26cae135-5cf4-44c2-8c43-d78211232fa4 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:16:38.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8669" for this suite. • [SLOW TEST:245.299 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":171,"skipped":2866,"failed":0} SSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:16:38.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting 
for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 11:16:38.934: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Sep 21 11:16:41.002: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:16:41.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5010" for this suite. 
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":303,"completed":172,"skipped":2873,"failed":0} SSSS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:16:41.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:16:41.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4771" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":303,"completed":173,"skipped":2877,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:16:41.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-2b5d9ea2-82e9-4c49-99a9-b255cf520b70 STEP: Creating secret with name secret-projected-all-test-volume-d56f67c0-a155-424a-9b7e-6c69d3bae424 STEP: Creating a pod to test Check all projections for projected volume plugin Sep 21 11:16:43.266: INFO: Waiting up to 5m0s for pod "projected-volume-24d0d1da-dced-4bf7-abf0-5ac40f68465f" in namespace "projected-7483" to be "Succeeded or Failed" Sep 21 11:16:43.304: INFO: Pod "projected-volume-24d0d1da-dced-4bf7-abf0-5ac40f68465f": Phase="Pending", Reason="", readiness=false. Elapsed: 38.459657ms Sep 21 11:16:45.425: INFO: Pod "projected-volume-24d0d1da-dced-4bf7-abf0-5ac40f68465f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.159075778s Sep 21 11:16:47.480: INFO: Pod "projected-volume-24d0d1da-dced-4bf7-abf0-5ac40f68465f": Phase="Running", Reason="", readiness=true. Elapsed: 4.213765604s Sep 21 11:16:49.488: INFO: Pod "projected-volume-24d0d1da-dced-4bf7-abf0-5ac40f68465f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.221747987s STEP: Saw pod success Sep 21 11:16:49.488: INFO: Pod "projected-volume-24d0d1da-dced-4bf7-abf0-5ac40f68465f" satisfied condition "Succeeded or Failed" Sep 21 11:16:49.494: INFO: Trying to get logs from node kali-worker pod projected-volume-24d0d1da-dced-4bf7-abf0-5ac40f68465f container projected-all-volume-test: STEP: delete the pod Sep 21 11:16:49.533: INFO: Waiting for pod projected-volume-24d0d1da-dced-4bf7-abf0-5ac40f68465f to disappear Sep 21 11:16:49.554: INFO: Pod projected-volume-24d0d1da-dced-4bf7-abf0-5ac40f68465f no longer exists [AfterEach] [sig-storage] Projected combined /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:16:49.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7483" for this suite. 
• [SLOW TEST:7.598 seconds] [sig-storage] Projected combined /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":303,"completed":174,"skipped":2885,"failed":0} [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:16:49.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Sep 21 11:16:49.711: INFO: Waiting up to 1m0s for all nodes to be ready Sep 21 11:17:49.802: INFO: Waiting for terminating namespaces to be deleted... 
[It] validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Sep 21 11:17:49.867: INFO: Created pod: pod0-sched-preemption-low-priority Sep 21 11:17:50.130: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:18:18.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-9122" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:88.901 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":303,"completed":175,"skipped":2885,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:18:18.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments Sep 21 11:18:18.786: INFO: Waiting up to 5m0s for pod "client-containers-261874eb-66d1-46c3-9d12-d22f573df7a7" in namespace "containers-1188" to be "Succeeded or Failed" Sep 21 11:18:18.820: INFO: Pod "client-containers-261874eb-66d1-46c3-9d12-d22f573df7a7": Phase="Pending", Reason="", readiness=false. Elapsed: 34.165836ms Sep 21 11:18:20.828: INFO: Pod "client-containers-261874eb-66d1-46c3-9d12-d22f573df7a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042084003s Sep 21 11:18:22.835: INFO: Pod "client-containers-261874eb-66d1-46c3-9d12-d22f573df7a7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.049394835s STEP: Saw pod success Sep 21 11:18:22.836: INFO: Pod "client-containers-261874eb-66d1-46c3-9d12-d22f573df7a7" satisfied condition "Succeeded or Failed" Sep 21 11:18:22.839: INFO: Trying to get logs from node kali-worker pod client-containers-261874eb-66d1-46c3-9d12-d22f573df7a7 container test-container: STEP: delete the pod Sep 21 11:18:22.917: INFO: Waiting for pod client-containers-261874eb-66d1-46c3-9d12-d22f573df7a7 to disappear Sep 21 11:18:22.922: INFO: Pod client-containers-261874eb-66d1-46c3-9d12-d22f573df7a7 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:18:22.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1188" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":303,"completed":176,"skipped":2912,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:18:22.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-7d8e123b-b47d-4780-afcb-6302e3bf25bc STEP: Creating a pod to test consume secrets Sep 21 11:18:23.057: INFO: Waiting up to 5m0s for pod "pod-secrets-a60e91a2-57f0-421a-b0c3-d307a786470e" in namespace "secrets-9704" to be "Succeeded or Failed" Sep 21 11:18:23.072: INFO: Pod "pod-secrets-a60e91a2-57f0-421a-b0c3-d307a786470e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.446611ms Sep 21 11:18:25.079: INFO: Pod "pod-secrets-a60e91a2-57f0-421a-b0c3-d307a786470e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022393581s Sep 21 11:18:27.088: INFO: Pod "pod-secrets-a60e91a2-57f0-421a-b0c3-d307a786470e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031207272s STEP: Saw pod success Sep 21 11:18:27.089: INFO: Pod "pod-secrets-a60e91a2-57f0-421a-b0c3-d307a786470e" satisfied condition "Succeeded or Failed" Sep 21 11:18:27.094: INFO: Trying to get logs from node kali-worker pod pod-secrets-a60e91a2-57f0-421a-b0c3-d307a786470e container secret-volume-test: STEP: delete the pod Sep 21 11:18:27.321: INFO: Waiting for pod pod-secrets-a60e91a2-57f0-421a-b0c3-d307a786470e to disappear Sep 21 11:18:27.342: INFO: Pod pod-secrets-a60e91a2-57f0-421a-b0c3-d307a786470e no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:18:27.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9704" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":177,"skipped":2922,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:18:27.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-5352 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5352 to expose endpoints map[] Sep 21 11:18:27.575: INFO: successfully validated that service multi-endpoint-test in namespace services-5352 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-5352 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5352 to expose endpoints map[pod1:[100]] Sep 21 11:18:31.725: INFO: successfully validated that service multi-endpoint-test in namespace services-5352 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-5352 STEP: waiting up to 3m0s for service 
multi-endpoint-test in namespace services-5352 to expose endpoints map[pod1:[100] pod2:[101]] Sep 21 11:18:35.814: INFO: successfully validated that service multi-endpoint-test in namespace services-5352 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-5352 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5352 to expose endpoints map[pod2:[101]] Sep 21 11:18:35.882: INFO: successfully validated that service multi-endpoint-test in namespace services-5352 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-5352 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5352 to expose endpoints map[] Sep 21 11:18:35.940: INFO: successfully validated that service multi-endpoint-test in namespace services-5352 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:18:36.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5352" for this suite. 
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:8.874 seconds]
[sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":303,"completed":178,"skipped":2945,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:18:36.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 21 11:18:50.595: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 21 11:18:52.704: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283930, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283930, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283930, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283930, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 21 11:18:54.711: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283930, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283930, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283930, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283930, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 21 11:18:57.783: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:18:57.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8848" for this suite.
STEP: Destroying namespace "webhook-8848-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:21.768 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":303,"completed":179,"skipped":2984,"failed":0}
SS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:18:58.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name s-test-opt-del-f92c95d5-c083-4e1e-915a-49f391655fb1
STEP: Creating secret with name s-test-opt-upd-3ce5038d-ae6a-4dd0-9774-5d7429e2143d
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-f92c95d5-c083-4e1e-915a-49f391655fb1
STEP: Updating secret s-test-opt-upd-3ce5038d-ae6a-4dd0-9774-5d7429e2143d
STEP: Creating secret with name s-test-opt-create-fc3f4b4a-f0b6-4ab3-b648-24d82b90fa78
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:19:08.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4443" for this suite.
• [SLOW TEST:10.353 seconds]
[sig-storage] Secrets
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":180,"skipped":2986,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:19:08.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Sep 21 11:19:09.255: INFO: Waiting up to 5m0s for pod "downward-api-4c47f5ef-b11c-428c-acd7-4a838d359476" in namespace "downward-api-3504" to be "Succeeded or Failed"
Sep 21 11:19:09.918: INFO: Pod "downward-api-4c47f5ef-b11c-428c-acd7-4a838d359476": Phase="Pending", Reason="", readiness=false. Elapsed: 663.292423ms
Sep 21 11:19:11.925: INFO: Pod "downward-api-4c47f5ef-b11c-428c-acd7-4a838d359476": Phase="Pending", Reason="", readiness=false. Elapsed: 2.670548666s
Sep 21 11:19:13.932: INFO: Pod "downward-api-4c47f5ef-b11c-428c-acd7-4a838d359476": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.67730548s
STEP: Saw pod success
Sep 21 11:19:13.932: INFO: Pod "downward-api-4c47f5ef-b11c-428c-acd7-4a838d359476" satisfied condition "Succeeded or Failed"
Sep 21 11:19:13.937: INFO: Trying to get logs from node kali-worker pod downward-api-4c47f5ef-b11c-428c-acd7-4a838d359476 container dapi-container:
STEP: delete the pod
Sep 21 11:19:14.066: INFO: Waiting for pod downward-api-4c47f5ef-b11c-428c-acd7-4a838d359476 to disappear
Sep 21 11:19:14.086: INFO: Pod downward-api-4c47f5ef-b11c-428c-acd7-4a838d359476 no longer exists
[AfterEach] [sig-node] Downward API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:19:14.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3504" for this suite.
• [SLOW TEST:6.028 seconds]
[sig-node] Downward API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":303,"completed":181,"skipped":3004,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:19:14.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 21 11:19:14.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:19:18.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-705" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":303,"completed":182,"skipped":3012,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:19:19.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Sep 21 11:19:29.315: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Sep 21 11:19:31.339: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283969, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283969, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283969, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736283969, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 21 11:19:34.413: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 21 11:19:34.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:19:35.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-7657" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:16.797 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":303,"completed":183,"skipped":3015,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:19:35.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Sep 21 11:19:35.924: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:19:35.941: INFO: Number of nodes with available pods: 0
Sep 21 11:19:35.941: INFO: Node kali-worker is running more than one daemon pod
Sep 21 11:19:36.999: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:19:37.020: INFO: Number of nodes with available pods: 0
Sep 21 11:19:37.020: INFO: Node kali-worker is running more than one daemon pod
Sep 21 11:19:38.042: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:19:38.048: INFO: Number of nodes with available pods: 0
Sep 21 11:19:38.048: INFO: Node kali-worker is running more than one daemon pod
Sep 21 11:19:38.955: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:19:38.962: INFO: Number of nodes with available pods: 0
Sep 21 11:19:38.962: INFO: Node kali-worker is running more than one daemon pod
Sep 21 11:19:39.956: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:19:39.963: INFO: Number of nodes with available pods: 1
Sep 21 11:19:39.964: INFO: Node kali-worker2 is running more than one daemon pod
Sep 21 11:19:40.955: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:19:40.962: INFO: Number of nodes with available pods: 2
Sep 21 11:19:40.962: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Sep 21 11:19:41.047: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:19:41.061: INFO: Number of nodes with available pods: 1
Sep 21 11:19:41.061: INFO: Node kali-worker2 is running more than one daemon pod
Sep 21 11:19:42.074: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:19:42.081: INFO: Number of nodes with available pods: 1
Sep 21 11:19:42.081: INFO: Node kali-worker2 is running more than one daemon pod
Sep 21 11:19:43.071: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:19:43.077: INFO: Number of nodes with available pods: 1
Sep 21 11:19:43.077: INFO: Node kali-worker2 is running more than one daemon pod
Sep 21 11:19:44.102: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:19:44.125: INFO: Number of nodes with available pods: 2
Sep 21 11:19:44.125: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6666, will wait for the garbage collector to delete the pods
Sep 21 11:19:44.199: INFO: Deleting DaemonSet.extensions daemon-set took: 8.640704ms
Sep 21 11:19:44.300: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.094491ms
Sep 21 11:19:53.307: INFO: Number of nodes with available pods: 0
Sep 21 11:19:53.307: INFO: Number of running nodes: 0, number of available pods: 0
Sep 21 11:19:53.313: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6666/daemonsets","resourceVersion":"2066530"},"items":null}
Sep 21 11:19:53.317: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6666/pods","resourceVersion":"2066530"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:19:53.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6666" for this suite.
• [SLOW TEST:17.562 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":303,"completed":184,"skipped":3040,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:19:53.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: waiting for pod running
STEP: creating a file in subpath
Sep 21 11:19:57.496: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-6285 PodName:var-expansion-c682d6f1-50e0-4294-8eae-75e1a4bb4d6b ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 21 11:19:57.496: INFO: >>> kubeConfig: /root/.kube/config
I0921 11:19:57.609099 10 log.go:181] (0x8320a80) (0x8321340) Create stream
I0921 11:19:57.609377 10 log.go:181] (0x8320a80) (0x8321340) Stream added, broadcasting: 1
I0921 11:19:57.612420 10 log.go:181] (0x8320a80) Reply frame received for 1
I0921 11:19:57.612563 10 log.go:181] (0x8320a80) (0xac96000) Create stream
I0921 11:19:57.612626 10 log.go:181] (0x8320a80) (0xac96000) Stream added, broadcasting: 3
I0921 11:19:57.613593 10 log.go:181] (0x8320a80) Reply frame received for 3
I0921 11:19:57.613733 10 log.go:181] (0x8320a80) (0x6e94cb0) Create stream
I0921 11:19:57.613813 10 log.go:181] (0x8320a80) (0x6e94cb0) Stream added, broadcasting: 5
I0921 11:19:57.614695 10 log.go:181] (0x8320a80) Reply frame received for 5
I0921 11:19:57.679105 10 log.go:181] (0x8320a80) Data frame received for 3
I0921 11:19:57.679325 10 log.go:181] (0xac96000) (3) Data frame handling
I0921 11:19:57.679473 10 log.go:181] (0x8320a80) Data frame received for 5
I0921 11:19:57.679654 10 log.go:181] (0x6e94cb0) (5) Data frame handling
I0921 11:19:57.681056 10 log.go:181] (0x8320a80) Data frame received for 1
I0921 11:19:57.681245 10 log.go:181] (0x8321340) (1) Data frame handling
I0921 11:19:57.681442 10 log.go:181] (0x8321340) (1) Data frame sent
I0921 11:19:57.681644 10 log.go:181] (0x8320a80) (0x8321340) Stream removed, broadcasting: 1
I0921 11:19:57.681896 10 log.go:181] (0x8320a80) Go away received
I0921 11:19:57.682448 10 log.go:181] (0x8320a80) (0x8321340) Stream removed, broadcasting: 1
I0921 11:19:57.682607 10 log.go:181] (0x8320a80) (0xac96000) Stream removed, broadcasting: 3
I0921 11:19:57.682840 10 log.go:181] (0x8320a80) (0x6e94cb0) Stream removed, broadcasting: 5
STEP: test for file in mounted path
Sep 21 11:19:57.689: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-6285 PodName:var-expansion-c682d6f1-50e0-4294-8eae-75e1a4bb4d6b ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 21 11:19:57.689: INFO: >>> kubeConfig: /root/.kube/config
I0921 11:19:57.800718 10 log.go:181] (0x7b78fc0) (0x7b79500) Create stream
I0921 11:19:57.800843 10 log.go:181] (0x7b78fc0) (0x7b79500) Stream added, broadcasting: 1
I0921 11:19:57.804782 10 log.go:181] (0x7b78fc0) Reply frame received for 1
I0921 11:19:57.805035 10 log.go:181] (0x7b78fc0) (0xb15bf10) Create stream
I0921 11:19:57.805155 10 log.go:181] (0x7b78fc0) (0xb15bf10) Stream added, broadcasting: 3
I0921 11:19:57.806915 10 log.go:181] (0x7b78fc0) Reply frame received for 3
I0921 11:19:57.807048 10 log.go:181] (0x7b78fc0) (0x91820e0) Create stream
I0921 11:19:57.807120 10 log.go:181] (0x7b78fc0) (0x91820e0) Stream added, broadcasting: 5
I0921 11:19:57.808349 10 log.go:181] (0x7b78fc0) Reply frame received for 5
I0921 11:19:57.862233 10 log.go:181] (0x7b78fc0) Data frame received for 3
I0921 11:19:57.862458 10 log.go:181] (0xb15bf10) (3) Data frame handling
I0921 11:19:57.862612 10 log.go:181] (0x7b78fc0) Data frame received for 5
I0921 11:19:57.862761 10 log.go:181] (0x91820e0) (5) Data frame handling
I0921 11:19:57.863862 10 log.go:181] (0x7b78fc0) Data frame received for 1
I0921 11:19:57.864036 10 log.go:181] (0x7b79500) (1) Data frame handling
I0921 11:19:57.864318 10 log.go:181] (0x7b79500) (1) Data frame sent
I0921 11:19:57.864506 10 log.go:181] (0x7b78fc0) (0x7b79500) Stream removed, broadcasting: 1
I0921 11:19:57.864729 10 log.go:181] (0x7b78fc0) Go away received
I0921 11:19:57.865266 10 log.go:181] (0x7b78fc0) (0x7b79500) Stream removed, broadcasting: 1
I0921 11:19:57.865463 10 log.go:181] (0x7b78fc0) (0xb15bf10) Stream removed, broadcasting: 3
I0921 11:19:57.865650 10 log.go:181] (0x7b78fc0) (0x91820e0) Stream removed, broadcasting: 5
STEP: updating the annotation value
Sep 21 11:19:58.386: INFO: Successfully updated pod "var-expansion-c682d6f1-50e0-4294-8eae-75e1a4bb4d6b"
STEP: waiting for annotated pod running
STEP: deleting the pod gracefully
Sep 21 11:19:58.392: INFO: Deleting pod "var-expansion-c682d6f1-50e0-4294-8eae-75e1a4bb4d6b" in namespace "var-expansion-6285"
Sep 21 11:19:58.397: INFO: Wait up to 5m0s for pod "var-expansion-c682d6f1-50e0-4294-8eae-75e1a4bb4d6b" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:20:44.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6285" for this suite.
• [SLOW TEST:51.065 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":303,"completed":185,"skipped":3067,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:20:44.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 21 11:20:44.565: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b0cafd4-7d1b-4ab7-8fe4-c620ca2efa4d" in namespace "projected-7786" to be "Succeeded or Failed"
Sep 21 11:20:44.592: INFO: Pod "downwardapi-volume-8b0cafd4-7d1b-4ab7-8fe4-c620ca2efa4d": Phase="Pending", Reason="", readiness=false. Elapsed: 27.225337ms
Sep 21 11:20:46.739: INFO: Pod "downwardapi-volume-8b0cafd4-7d1b-4ab7-8fe4-c620ca2efa4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173713461s
Sep 21 11:20:48.747: INFO: Pod "downwardapi-volume-8b0cafd4-7d1b-4ab7-8fe4-c620ca2efa4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.18241074s
STEP: Saw pod success
Sep 21 11:20:48.748: INFO: Pod "downwardapi-volume-8b0cafd4-7d1b-4ab7-8fe4-c620ca2efa4d" satisfied condition "Succeeded or Failed"
Sep 21 11:20:48.754: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-8b0cafd4-7d1b-4ab7-8fe4-c620ca2efa4d container client-container:
STEP: delete the pod
Sep 21 11:20:48.802: INFO: Waiting for pod downwardapi-volume-8b0cafd4-7d1b-4ab7-8fe4-c620ca2efa4d to disappear
Sep 21 11:20:48.850: INFO: Pod downwardapi-volume-8b0cafd4-7d1b-4ab7-8fe4-c620ca2efa4d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:20:48.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7786" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":186,"skipped":3069,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:20:48.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod busybox-23c7c488-13c6-47cf-a437-f29fb4fafe45 in namespace container-probe-1775
Sep 21 11:20:53.033: INFO: Started pod busybox-23c7c488-13c6-47cf-a437-f29fb4fafe45 in namespace container-probe-1775
STEP: checking the pod's current state and verifying that restartCount is present
Sep 21 11:20:53.038: INFO: Initial restart count of pod busybox-23c7c488-13c6-47cf-a437-f29fb4fafe45 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:24:53.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1775" for this suite.
• [SLOW TEST:244.446 seconds]
[k8s.io] Probing container
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":187,"skipped":3100,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:24:53.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-4085
[It] should have a working scale subresource [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating statefulset ss in namespace statefulset-4085
Sep 21 11:24:53.480: INFO: Found 0 stateful pods, waiting for 1
Sep 21 11:25:03.488: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Sep 21 11:25:03.521: INFO: Deleting all statefulset in ns statefulset-4085
Sep 21 11:25:03.544: INFO: Scaling statefulset ss to 0
Sep 21 11:25:23.686: INFO: Waiting for statefulset status.replicas updated to 0
Sep 21 11:25:23.691: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:25:23.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4085" for this suite.
• [SLOW TEST:30.400 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    should have a working scale subresource [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":303,"completed":188,"skipped":3131,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should delete a collection of pods [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:25:23.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should delete a collection of pods [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create set of pods
Sep 21 11:25:23.818: INFO: created test-pod-1
Sep 21 11:25:23.828: INFO:
created test-pod-2 Sep 21 11:25:23.834: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:25:24.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1920" for this suite. •{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":303,"completed":189,"skipped":3160,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:25:24.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:25:40.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8024" for this suite. • [SLOW TEST:16.352 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":303,"completed":190,"skipped":3185,"failed":0} S ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:25:40.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 11:25:44.586: INFO: Waiting up to 5m0s for pod "client-envvars-4eef41fc-c1ec-4895-b6f8-e7a83efff0f5" in namespace "pods-7921" to be "Succeeded or Failed" Sep 21 11:25:44.607: INFO: Pod "client-envvars-4eef41fc-c1ec-4895-b6f8-e7a83efff0f5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.485189ms Sep 21 11:25:46.826: INFO: Pod "client-envvars-4eef41fc-c1ec-4895-b6f8-e7a83efff0f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239767404s Sep 21 11:25:48.835: INFO: Pod "client-envvars-4eef41fc-c1ec-4895-b6f8-e7a83efff0f5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.248694873s STEP: Saw pod success Sep 21 11:25:48.835: INFO: Pod "client-envvars-4eef41fc-c1ec-4895-b6f8-e7a83efff0f5" satisfied condition "Succeeded or Failed" Sep 21 11:25:48.841: INFO: Trying to get logs from node kali-worker2 pod client-envvars-4eef41fc-c1ec-4895-b6f8-e7a83efff0f5 container env3cont: STEP: delete the pod Sep 21 11:25:48.966: INFO: Waiting for pod client-envvars-4eef41fc-c1ec-4895-b6f8-e7a83efff0f5 to disappear Sep 21 11:25:49.071: INFO: Pod client-envvars-4eef41fc-c1ec-4895-b6f8-e7a83efff0f5 no longer exists [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:25:49.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7921" for this suite. • [SLOW TEST:8.651 seconds] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":303,"completed":191,"skipped":3186,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes 
client Sep 21 11:25:49.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions Sep 21 11:25:49.165: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config api-versions' Sep 21 11:25:50.389: INFO: stderr: "" Sep 21 11:25:50.389: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:25:50.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3423" for this 
suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":303,"completed":192,"skipped":3201,"failed":0} ------------------------------ [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:25:50.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:25:50.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9818" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":303,"completed":193,"skipped":3201,"failed":0} S ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:25:50.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3949.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-3949.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3949.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3949.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-3949.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3949.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 21 11:25:56.807: INFO: DNS probes using dns-3949/dns-test-b25919a6-9a92-46f8-a0fe-0dd5e1846a2a succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:25:56.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3949" for this suite. 
• [SLOW TEST:6.368 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":303,"completed":194,"skipped":3202,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:25:56.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:26:03.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6917" for this suite. STEP: Destroying namespace "nsdeletetest-1960" for this suite. Sep 21 11:26:03.534: INFO: Namespace nsdeletetest-1960 was already deleted STEP: Destroying namespace "nsdeletetest-7315" for this suite. • [SLOW TEST:6.590 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":303,"completed":195,"skipped":3228,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:26:03.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-7843 STEP: creating replication controller nodeport-test in namespace services-7843 I0921 11:26:03.729110 10 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-7843, replica count: 2 I0921 11:26:06.780735 10 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0921 11:26:09.781792 10 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 21 11:26:09.782: INFO: Creating new exec pod Sep 21 11:26:14.969: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7843 execpods9w2g -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Sep 21 11:26:19.910: INFO: stderr: "I0921 11:26:19.788692 2813 log.go:181] (0x264d2d0) (0x264d5e0) Create stream\nI0921 11:26:19.790612 2813 log.go:181] (0x264d2d0) (0x264d5e0) Stream added, broadcasting: 1\nI0921 11:26:19.802268 2813 log.go:181] (0x264d2d0) Reply frame received for 1\nI0921 11:26:19.803255 2813 log.go:181] (0x264d2d0) (0x31bcf50) Create stream\nI0921 11:26:19.803382 2813 log.go:181] (0x264d2d0) (0x31bcf50) Stream added, broadcasting: 3\nI0921 11:26:19.805427 2813 log.go:181] (0x264d2d0) Reply frame received for 3\nI0921 11:26:19.805640 2813 log.go:181] (0x264d2d0) (0x2d42070) Create stream\nI0921 11:26:19.805713 2813 log.go:181] 
(0x264d2d0) (0x2d42070) Stream added, broadcasting: 5\nI0921 11:26:19.807523 2813 log.go:181] (0x264d2d0) Reply frame received for 5\nI0921 11:26:19.893770 2813 log.go:181] (0x264d2d0) Data frame received for 5\nI0921 11:26:19.894259 2813 log.go:181] (0x264d2d0) Data frame received for 3\nI0921 11:26:19.894418 2813 log.go:181] (0x31bcf50) (3) Data frame handling\nI0921 11:26:19.894665 2813 log.go:181] (0x2d42070) (5) Data frame handling\nI0921 11:26:19.895288 2813 log.go:181] (0x264d2d0) Data frame received for 1\nI0921 11:26:19.895355 2813 log.go:181] (0x264d5e0) (1) Data frame handling\nI0921 11:26:19.895755 2813 log.go:181] (0x264d5e0) (1) Data frame sent\nI0921 11:26:19.896019 2813 log.go:181] (0x2d42070) (5) Data frame sent\nI0921 11:26:19.896312 2813 log.go:181] (0x264d2d0) Data frame received for 5\n+ nc -zv -t -w 2 nodeport-test 80\nI0921 11:26:19.896504 2813 log.go:181] (0x2d42070) (5) Data frame handling\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0921 11:26:19.897682 2813 log.go:181] (0x2d42070) (5) Data frame sent\nI0921 11:26:19.897808 2813 log.go:181] (0x264d2d0) Data frame received for 5\nI0921 11:26:19.897875 2813 log.go:181] (0x2d42070) (5) Data frame handling\nI0921 11:26:19.898645 2813 log.go:181] (0x264d2d0) (0x264d5e0) Stream removed, broadcasting: 1\nI0921 11:26:19.900877 2813 log.go:181] (0x264d2d0) Go away received\nI0921 11:26:19.902665 2813 log.go:181] (0x264d2d0) (0x264d5e0) Stream removed, broadcasting: 1\nI0921 11:26:19.902829 2813 log.go:181] (0x264d2d0) (0x31bcf50) Stream removed, broadcasting: 3\nI0921 11:26:19.902989 2813 log.go:181] (0x264d2d0) (0x2d42070) Stream removed, broadcasting: 5\n" Sep 21 11:26:19.911: INFO: stdout: "" Sep 21 11:26:19.915: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7843 execpods9w2g -- /bin/sh -x -c nc -zv -t -w 2 10.96.172.147 80' Sep 21 11:26:21.448: INFO: stderr: "I0921 11:26:21.323652 2834 
log.go:181] (0x2566000) (0x2566070) Create stream\nI0921 11:26:21.326619 2834 log.go:181] (0x2566000) (0x2566070) Stream added, broadcasting: 1\nI0921 11:26:21.346960 2834 log.go:181] (0x2566000) Reply frame received for 1\nI0921 11:26:21.347497 2834 log.go:181] (0x2566000) (0x2f6c070) Create stream\nI0921 11:26:21.347581 2834 log.go:181] (0x2566000) (0x2f6c070) Stream added, broadcasting: 3\nI0921 11:26:21.348795 2834 log.go:181] (0x2566000) Reply frame received for 3\nI0921 11:26:21.349019 2834 log.go:181] (0x2566000) (0x2f0e0e0) Create stream\nI0921 11:26:21.349084 2834 log.go:181] (0x2566000) (0x2f0e0e0) Stream added, broadcasting: 5\nI0921 11:26:21.350084 2834 log.go:181] (0x2566000) Reply frame received for 5\nI0921 11:26:21.430067 2834 log.go:181] (0x2566000) Data frame received for 5\nI0921 11:26:21.430383 2834 log.go:181] (0x2566000) Data frame received for 3\nI0921 11:26:21.430529 2834 log.go:181] (0x2f6c070) (3) Data frame handling\nI0921 11:26:21.430861 2834 log.go:181] (0x2566000) Data frame received for 1\nI0921 11:26:21.431056 2834 log.go:181] (0x2566070) (1) Data frame handling\nI0921 11:26:21.431418 2834 log.go:181] (0x2f0e0e0) (5) Data frame handling\nI0921 11:26:21.432224 2834 log.go:181] (0x2566070) (1) Data frame sent\nI0921 11:26:21.432356 2834 log.go:181] (0x2f0e0e0) (5) Data frame sent\nI0921 11:26:21.432676 2834 log.go:181] (0x2566000) Data frame received for 5\nI0921 11:26:21.432784 2834 log.go:181] (0x2f0e0e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.172.147 80\nConnection to 10.96.172.147 80 port [tcp/http] succeeded!\nI0921 11:26:21.433785 2834 log.go:181] (0x2566000) (0x2566070) Stream removed, broadcasting: 1\nI0921 11:26:21.437315 2834 log.go:181] (0x2566000) Go away received\nI0921 11:26:21.439630 2834 log.go:181] (0x2566000) (0x2566070) Stream removed, broadcasting: 1\nI0921 11:26:21.440032 2834 log.go:181] (0x2566000) (0x2f6c070) Stream removed, broadcasting: 3\nI0921 11:26:21.440378 2834 log.go:181] (0x2566000) (0x2f0e0e0) 
Stream removed, broadcasting: 5\n" Sep 21 11:26:21.449: INFO: stdout: "" Sep 21 11:26:21.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7843 execpods9w2g -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 30757' Sep 21 11:26:22.941: INFO: stderr: "I0921 11:26:22.825653 2854 log.go:181] (0x24e2000) (0x24e2070) Create stream\nI0921 11:26:22.827510 2854 log.go:181] (0x24e2000) (0x24e2070) Stream added, broadcasting: 1\nI0921 11:26:22.837018 2854 log.go:181] (0x24e2000) Reply frame received for 1\nI0921 11:26:22.837516 2854 log.go:181] (0x24e2000) (0x2c6c460) Create stream\nI0921 11:26:22.837579 2854 log.go:181] (0x24e2000) (0x2c6c460) Stream added, broadcasting: 3\nI0921 11:26:22.839410 2854 log.go:181] (0x24e2000) Reply frame received for 3\nI0921 11:26:22.839807 2854 log.go:181] (0x24e2000) (0x2c6c5b0) Create stream\nI0921 11:26:22.839877 2854 log.go:181] (0x24e2000) (0x2c6c5b0) Stream added, broadcasting: 5\nI0921 11:26:22.841332 2854 log.go:181] (0x24e2000) Reply frame received for 5\nI0921 11:26:22.923395 2854 log.go:181] (0x24e2000) Data frame received for 3\nI0921 11:26:22.923749 2854 log.go:181] (0x24e2000) Data frame received for 5\nI0921 11:26:22.923935 2854 log.go:181] (0x2c6c5b0) (5) Data frame handling\nI0921 11:26:22.924469 2854 log.go:181] (0x24e2000) Data frame received for 1\nI0921 11:26:22.924573 2854 log.go:181] (0x24e2070) (1) Data frame handling\nI0921 11:26:22.924739 2854 log.go:181] (0x2c6c460) (3) Data frame handling\nI0921 11:26:22.925353 2854 log.go:181] (0x24e2070) (1) Data frame sent\n+ nc -zv -t -w 2 172.18.0.11 30757\nConnection to 172.18.0.11 30757 port [tcp/30757] succeeded!\nI0921 11:26:22.926598 2854 log.go:181] (0x2c6c5b0) (5) Data frame sent\nI0921 11:26:22.926768 2854 log.go:181] (0x24e2000) Data frame received for 5\nI0921 11:26:22.927089 2854 log.go:181] (0x24e2000) (0x24e2070) Stream removed, broadcasting: 1\nI0921 11:26:22.928788 2854 log.go:181] 
(0x2c6c5b0) (5) Data frame handling\nI0921 11:26:22.929189 2854 log.go:181] (0x24e2000) Go away received\nI0921 11:26:22.931900 2854 log.go:181] (0x24e2000) (0x24e2070) Stream removed, broadcasting: 1\nI0921 11:26:22.932109 2854 log.go:181] (0x24e2000) (0x2c6c460) Stream removed, broadcasting: 3\nI0921 11:26:22.932357 2854 log.go:181] (0x24e2000) (0x2c6c5b0) Stream removed, broadcasting: 5\n" Sep 21 11:26:22.942: INFO: stdout: "" Sep 21 11:26:22.943: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7843 execpods9w2g -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 30757' Sep 21 11:26:24.423: INFO: stderr: "I0921 11:26:24.301965 2875 log.go:181] (0x2951730) (0x29517a0) Create stream\nI0921 11:26:24.305667 2875 log.go:181] (0x2951730) (0x29517a0) Stream added, broadcasting: 1\nI0921 11:26:24.316055 2875 log.go:181] (0x2951730) Reply frame received for 1\nI0921 11:26:24.316575 2875 log.go:181] (0x2951730) (0x265a070) Create stream\nI0921 11:26:24.316643 2875 log.go:181] (0x2951730) (0x265a070) Stream added, broadcasting: 3\nI0921 11:26:24.317926 2875 log.go:181] (0x2951730) Reply frame received for 3\nI0921 11:26:24.318129 2875 log.go:181] (0x2951730) (0x24c6070) Create stream\nI0921 11:26:24.318195 2875 log.go:181] (0x2951730) (0x24c6070) Stream added, broadcasting: 5\nI0921 11:26:24.319399 2875 log.go:181] (0x2951730) Reply frame received for 5\nI0921 11:26:24.405892 2875 log.go:181] (0x2951730) Data frame received for 5\nI0921 11:26:24.406123 2875 log.go:181] (0x2951730) Data frame received for 1\nI0921 11:26:24.406833 2875 log.go:181] (0x29517a0) (1) Data frame handling\nI0921 11:26:24.407035 2875 log.go:181] (0x2951730) Data frame received for 3\nI0921 11:26:24.407349 2875 log.go:181] (0x265a070) (3) Data frame handling\nI0921 11:26:24.407532 2875 log.go:181] (0x24c6070) (5) Data frame handling\nI0921 11:26:24.408587 2875 log.go:181] (0x29517a0) (1) Data frame sent\nI0921 
11:26:24.408852 2875 log.go:181] (0x24c6070) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.12 30757\nConnection to 172.18.0.12 30757 port [tcp/30757] succeeded!\nI0921 11:26:24.409728 2875 log.go:181] (0x2951730) Data frame received for 5\nI0921 11:26:24.409842 2875 log.go:181] (0x24c6070) (5) Data frame handling\nI0921 11:26:24.411672 2875 log.go:181] (0x2951730) (0x29517a0) Stream removed, broadcasting: 1\nI0921 11:26:24.413109 2875 log.go:181] (0x2951730) Go away received\nI0921 11:26:24.415596 2875 log.go:181] (0x2951730) (0x29517a0) Stream removed, broadcasting: 1\nI0921 11:26:24.415779 2875 log.go:181] (0x2951730) (0x265a070) Stream removed, broadcasting: 3\nI0921 11:26:24.415930 2875 log.go:181] (0x2951730) (0x24c6070) Stream removed, broadcasting: 5\n" Sep 21 11:26:24.424: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:26:24.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7843" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:20.903 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":303,"completed":196,"skipped":3235,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:26:24.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Sep 21 11:26:24.556: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. Sep 21 11:26:32.955: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Sep 21 11:26:35.355: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736284392, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736284392, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736284393, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736284392, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 21 11:26:37.364: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736284392, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736284392, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736284393, 
loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736284392, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 21 11:26:40.116: INFO: Waited 731.966597ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:26:40.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-9036" for this suite. • [SLOW TEST:16.469 seconds] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":303,"completed":197,"skipped":3256,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:26:40.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:26:41.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7171" for this suite. 
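The Secrets test above creates a secret, patches it, then looks the patch back up by label. As a rough illustration of what the patch step does, the sketch below applies an RFC 7386 JSON merge patch to a Secret-like dict in pure Python; the helper name `merge_patch` and the sample data are hypothetical, and the real test of course goes through the Kubernetes API rather than plain dicts.

```python
def merge_patch(target, patch):
    """Apply a JSON merge patch (RFC 7386) to a dict-shaped object."""
    if not isinstance(patch, dict):
        return patch  # non-object patch replaces the target wholesale
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null in the patch deletes the key
        else:
            result[key] = merge_patch(result.get(key), value)
    return result

# Hypothetical Secret-like object, loosely mirroring the test's flow.
secret = {
    "metadata": {"name": "test-secret", "labels": {}},
    "data": {"key": "dmFsdWU="},
}
# Patch adds the label the test later uses as a LabelSelector.
patched = merge_patch(secret, {"metadata": {"labels": {"testsecret": "true"}}})
```

Fields not mentioned in the patch (here, `data` and `metadata.name`) are left untouched, which is why the test can still find the same secret by its new label afterwards.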
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":303,"completed":198,"skipped":3261,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:26:41.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Sep 21 11:26:41.556: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:28:23.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7532" for this suite. 
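The CRD publish-OpenAPI test above flips one version of a multi-version CRD to `served: false` and then verifies that only the still-served version remains visible. A minimal sketch of that filtering, using field names from the `apiextensions.k8s.io/v1` CRD schema (`versions[].name`, `versions[].served`); the group name and the `served_versions` helper are hypothetical, not part of the e2e framework:

```python
# Hypothetical CRD spec fragment: v2 has been marked not served,
# so discovery/OpenAPI should publish only v1.
crd_spec = {
    "group": "crd-publish-openapi-test.example.com",
    "versions": [
        {"name": "v1", "served": True, "storage": True},
        {"name": "v2", "served": False, "storage": False},
    ],
}

def served_versions(spec):
    """Return the version names that should appear in discovery."""
    return [v["name"] for v in spec["versions"] if v["served"]]
```

Note that `served` and `storage` are independent flags: a version can remain the storage version while no longer being served, which is exactly the transitional state this conformance test exercises.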
• [SLOW TEST:102.316 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":303,"completed":199,"skipped":3296,"failed":0} SSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:28:23.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-7374 STEP: creating service 
affinity-nodeport-transition in namespace services-7374 STEP: creating replication controller affinity-nodeport-transition in namespace services-7374 I0921 11:28:23.803268 10 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-7374, replica count: 3 I0921 11:28:26.854922 10 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0921 11:28:29.856312 10 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 21 11:28:29.879: INFO: Creating new exec pod Sep 21 11:28:36.918: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7374 execpod-affinityhq8sx -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Sep 21 11:28:38.392: INFO: stderr: "I0921 11:28:38.257681 2895 log.go:181] (0x2682460) (0x2683730) Create stream\nI0921 11:28:38.259594 2895 log.go:181] (0x2682460) (0x2683730) Stream added, broadcasting: 1\nI0921 11:28:38.268842 2895 log.go:181] (0x2682460) Reply frame received for 1\nI0921 11:28:38.269647 2895 log.go:181] (0x2682460) (0x2804070) Create stream\nI0921 11:28:38.269752 2895 log.go:181] (0x2682460) (0x2804070) Stream added, broadcasting: 3\nI0921 11:28:38.271653 2895 log.go:181] (0x2682460) Reply frame received for 3\nI0921 11:28:38.272299 2895 log.go:181] (0x2682460) (0x2683a40) Create stream\nI0921 11:28:38.272426 2895 log.go:181] (0x2682460) (0x2683a40) Stream added, broadcasting: 5\nI0921 11:28:38.274295 2895 log.go:181] (0x2682460) Reply frame received for 5\nI0921 11:28:38.358097 2895 log.go:181] (0x2682460) Data frame received for 5\nI0921 11:28:38.359132 2895 log.go:181] (0x2683a40) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0921 11:28:38.361956 2895 
log.go:181] (0x2683a40) (5) Data frame sent\nI0921 11:28:38.367480 2895 log.go:181] (0x2682460) Data frame received for 5\nI0921 11:28:38.367635 2895 log.go:181] (0x2683a40) (5) Data frame handling\nI0921 11:28:38.372009 2895 log.go:181] (0x2682460) Data frame received for 1\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0921 11:28:38.377605 2895 log.go:181] (0x2682460) Data frame received for 3\nI0921 11:28:38.377708 2895 log.go:181] (0x2804070) (3) Data frame handling\nI0921 11:28:38.377802 2895 log.go:181] (0x2683730) (1) Data frame handling\nI0921 11:28:38.377975 2895 log.go:181] (0x2683730) (1) Data frame sent\nI0921 11:28:38.378194 2895 log.go:181] (0x2683a40) (5) Data frame sent\nI0921 11:28:38.378331 2895 log.go:181] (0x2682460) Data frame received for 5\nI0921 11:28:38.378421 2895 log.go:181] (0x2683a40) (5) Data frame handling\nI0921 11:28:38.378988 2895 log.go:181] (0x2682460) (0x2683730) Stream removed, broadcasting: 1\nI0921 11:28:38.380947 2895 log.go:181] (0x2682460) Go away received\nI0921 11:28:38.383956 2895 log.go:181] (0x2682460) (0x2683730) Stream removed, broadcasting: 1\nI0921 11:28:38.384483 2895 log.go:181] (0x2682460) (0x2804070) Stream removed, broadcasting: 3\nI0921 11:28:38.384658 2895 log.go:181] (0x2682460) (0x2683a40) Stream removed, broadcasting: 5\n" Sep 21 11:28:38.393: INFO: stdout: "" Sep 21 11:28:38.396: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7374 execpod-affinityhq8sx -- /bin/sh -x -c nc -zv -t -w 2 10.109.19.9 80' Sep 21 11:28:39.973: INFO: stderr: "I0921 11:28:39.837649 2916 log.go:181] (0x257c4d0) (0x257c540) Create stream\nI0921 11:28:39.842559 2916 log.go:181] (0x257c4d0) (0x257c540) Stream added, broadcasting: 1\nI0921 11:28:39.865668 2916 log.go:181] (0x257c4d0) Reply frame received for 1\nI0921 11:28:39.866614 2916 log.go:181] (0x257c4d0) (0x26199d0) Create stream\nI0921 11:28:39.866732 2916 
log.go:181] (0x257c4d0) (0x26199d0) Stream added, broadcasting: 3\nI0921 11:28:39.868840 2916 log.go:181] (0x257c4d0) Reply frame received for 3\nI0921 11:28:39.869100 2916 log.go:181] (0x257c4d0) (0x257c070) Create stream\nI0921 11:28:39.869180 2916 log.go:181] (0x257c4d0) (0x257c070) Stream added, broadcasting: 5\nI0921 11:28:39.870541 2916 log.go:181] (0x257c4d0) Reply frame received for 5\nI0921 11:28:39.951088 2916 log.go:181] (0x257c4d0) Data frame received for 5\nI0921 11:28:39.951521 2916 log.go:181] (0x257c4d0) Data frame received for 1\nI0921 11:28:39.951760 2916 log.go:181] (0x257c540) (1) Data frame handling\nI0921 11:28:39.951994 2916 log.go:181] (0x257c4d0) Data frame received for 3\nI0921 11:28:39.952094 2916 log.go:181] (0x26199d0) (3) Data frame handling\nI0921 11:28:39.952270 2916 log.go:181] (0x257c070) (5) Data frame handling\nI0921 11:28:39.953333 2916 log.go:181] (0x257c540) (1) Data frame sent\nI0921 11:28:39.954488 2916 log.go:181] (0x257c070) (5) Data frame sent\nI0921 11:28:39.954623 2916 log.go:181] (0x257c4d0) Data frame received for 5\nI0921 11:28:39.954730 2916 log.go:181] (0x257c070) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.19.9 80\nConnection to 10.109.19.9 80 port [tcp/http] succeeded!\nI0921 11:28:39.957338 2916 log.go:181] (0x257c4d0) (0x257c540) Stream removed, broadcasting: 1\nI0921 11:28:39.958294 2916 log.go:181] (0x257c4d0) Go away received\nI0921 11:28:39.962277 2916 log.go:181] (0x257c4d0) (0x257c540) Stream removed, broadcasting: 1\nI0921 11:28:39.962551 2916 log.go:181] (0x257c4d0) (0x26199d0) Stream removed, broadcasting: 3\nI0921 11:28:39.962783 2916 log.go:181] (0x257c4d0) (0x257c070) Stream removed, broadcasting: 5\n" Sep 21 11:28:39.973: INFO: stdout: "" Sep 21 11:28:39.974: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7374 execpod-affinityhq8sx -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 31799' Sep 21 11:28:41.549: INFO: 
stderr: "I0921 11:28:41.402347 2936 log.go:181] (0x2d27ab0) (0x2d27b20) Create stream\nI0921 11:28:41.406292 2936 log.go:181] (0x2d27ab0) (0x2d27b20) Stream added, broadcasting: 1\nI0921 11:28:41.426601 2936 log.go:181] (0x2d27ab0) Reply frame received for 1\nI0921 11:28:41.427055 2936 log.go:181] (0x2d27ab0) (0x2d26070) Create stream\nI0921 11:28:41.427116 2936 log.go:181] (0x2d27ab0) (0x2d26070) Stream added, broadcasting: 3\nI0921 11:28:41.428689 2936 log.go:181] (0x2d27ab0) Reply frame received for 3\nI0921 11:28:41.428915 2936 log.go:181] (0x2d27ab0) (0x24e8070) Create stream\nI0921 11:28:41.428974 2936 log.go:181] (0x2d27ab0) (0x24e8070) Stream added, broadcasting: 5\nI0921 11:28:41.430158 2936 log.go:181] (0x2d27ab0) Reply frame received for 5\nI0921 11:28:41.529382 2936 log.go:181] (0x2d27ab0) Data frame received for 5\nI0921 11:28:41.529896 2936 log.go:181] (0x2d27ab0) Data frame received for 3\nI0921 11:28:41.530337 2936 log.go:181] (0x2d26070) (3) Data frame handling\nI0921 11:28:41.530483 2936 log.go:181] (0x2d27ab0) Data frame received for 1\nI0921 11:28:41.530721 2936 log.go:181] (0x2d27b20) (1) Data frame handling\nI0921 11:28:41.530984 2936 log.go:181] (0x24e8070) (5) Data frame handling\nI0921 11:28:41.533042 2936 log.go:181] (0x2d27b20) (1) Data frame sent\nI0921 11:28:41.533340 2936 log.go:181] (0x24e8070) (5) Data frame sent\nI0921 11:28:41.533481 2936 log.go:181] (0x2d27ab0) Data frame received for 5\nI0921 11:28:41.533591 2936 log.go:181] (0x24e8070) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 31799\nConnection to 172.18.0.11 31799 port [tcp/31799] succeeded!\nI0921 11:28:41.534277 2936 log.go:181] (0x2d27ab0) (0x2d27b20) Stream removed, broadcasting: 1\nI0921 11:28:41.537145 2936 log.go:181] (0x2d27ab0) Go away received\nI0921 11:28:41.540039 2936 log.go:181] (0x2d27ab0) (0x2d27b20) Stream removed, broadcasting: 1\nI0921 11:28:41.540527 2936 log.go:181] (0x2d27ab0) (0x2d26070) Stream removed, broadcasting: 3\nI0921 11:28:41.540753 
2936 log.go:181] (0x2d27ab0) (0x24e8070) Stream removed, broadcasting: 5\n" Sep 21 11:28:41.550: INFO: stdout: "" Sep 21 11:28:41.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7374 execpod-affinityhq8sx -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 31799' Sep 21 11:28:43.091: INFO: stderr: "I0921 11:28:42.971008 2956 log.go:181] (0x2804310) (0x2804620) Create stream\nI0921 11:28:42.974160 2956 log.go:181] (0x2804310) (0x2804620) Stream added, broadcasting: 1\nI0921 11:28:42.983167 2956 log.go:181] (0x2804310) Reply frame received for 1\nI0921 11:28:42.983829 2956 log.go:181] (0x2804310) (0x2e50070) Create stream\nI0921 11:28:42.983941 2956 log.go:181] (0x2804310) (0x2e50070) Stream added, broadcasting: 3\nI0921 11:28:42.989557 2956 log.go:181] (0x2804310) Reply frame received for 3\nI0921 11:28:42.989825 2956 log.go:181] (0x2804310) (0x2805110) Create stream\nI0921 11:28:42.989898 2956 log.go:181] (0x2804310) (0x2805110) Stream added, broadcasting: 5\nI0921 11:28:42.991103 2956 log.go:181] (0x2804310) Reply frame received for 5\nI0921 11:28:43.073889 2956 log.go:181] (0x2804310) Data frame received for 5\nI0921 11:28:43.074352 2956 log.go:181] (0x2804310) Data frame received for 3\nI0921 11:28:43.074657 2956 log.go:181] (0x2e50070) (3) Data frame handling\nI0921 11:28:43.075372 2956 log.go:181] (0x2804310) Data frame received for 1\nI0921 11:28:43.075625 2956 log.go:181] (0x2804620) (1) Data frame handling\nI0921 11:28:43.075766 2956 log.go:181] (0x2805110) (5) Data frame handling\nI0921 11:28:43.077010 2956 log.go:181] (0x2805110) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.12 31799\nConnection to 172.18.0.12 31799 port [tcp/31799] succeeded!\nI0921 11:28:43.077499 2956 log.go:181] (0x2804620) (1) Data frame sent\nI0921 11:28:43.077615 2956 log.go:181] (0x2804310) Data frame received for 5\nI0921 11:28:43.077717 2956 log.go:181] (0x2805110) (5) Data frame handling\nI0921 
11:28:43.078463 2956 log.go:181] (0x2804310) (0x2804620) Stream removed, broadcasting: 1\nI0921 11:28:43.078943 2956 log.go:181] (0x2804310) Go away received\nI0921 11:28:43.082346 2956 log.go:181] (0x2804310) (0x2804620) Stream removed, broadcasting: 1\nI0921 11:28:43.082679 2956 log.go:181] (0x2804310) (0x2e50070) Stream removed, broadcasting: 3\nI0921 11:28:43.082939 2956 log.go:181] (0x2804310) (0x2805110) Stream removed, broadcasting: 5\n" Sep 21 11:28:43.092: INFO: stdout: "" Sep 21 11:28:43.105: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7374 execpod-affinityhq8sx -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:31799/ ; done' Sep 21 11:28:44.773: INFO: stderr: "I0921 11:28:44.560242 2976 log.go:181] (0x267a700) (0x267a930) Create stream\nI0921 11:28:44.562248 2976 log.go:181] (0x267a700) (0x267a930) Stream added, broadcasting: 1\nI0921 11:28:44.572727 2976 log.go:181] (0x267a700) Reply frame received for 1\nI0921 11:28:44.573699 2976 log.go:181] (0x267a700) (0x2512700) Create stream\nI0921 11:28:44.573840 2976 log.go:181] (0x267a700) (0x2512700) Stream added, broadcasting: 3\nI0921 11:28:44.575620 2976 log.go:181] (0x267a700) Reply frame received for 3\nI0921 11:28:44.575798 2976 log.go:181] (0x267a700) (0x267b110) Create stream\nI0921 11:28:44.575848 2976 log.go:181] (0x267a700) (0x267b110) Stream added, broadcasting: 5\nI0921 11:28:44.577455 2976 log.go:181] (0x267a700) Reply frame received for 5\nI0921 11:28:44.663882 2976 log.go:181] (0x267a700) Data frame received for 5\nI0921 11:28:44.664552 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.664735 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.664848 2976 log.go:181] (0x267b110) (5) Data frame handling\nI0921 11:28:44.665834 2976 log.go:181] (0x267b110) (5) Data frame sent\nI0921 11:28:44.665966 2976 log.go:181] (0x2512700) 
(3) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:44.668784 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.668957 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.669104 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.669308 2976 log.go:181] (0x267a700) Data frame received for 5\nI0921 11:28:44.669507 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.669680 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.669826 2976 log.go:181] (0x267b110) (5) Data frame handling\nI0921 11:28:44.670034 2976 log.go:181] (0x267b110) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:44.670184 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.676048 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.676345 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.676447 2976 log.go:181] (0x267a700) Data frame received for 5\nI0921 11:28:44.676579 2976 log.go:181] (0x267b110) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:44.676722 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.676875 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.676978 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.677125 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.677255 2976 log.go:181] (0x267b110) (5) Data frame sent\nI0921 11:28:44.682338 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.682462 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.682622 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.683262 2976 log.go:181] (0x267a700) Data frame received for 5\nI0921 11:28:44.683420 2976 log.go:181] (0x267b110) (5) Data frame handling\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:44.683512 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.683739 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.683839 2976 log.go:181] (0x267b110) (5) Data frame sent\nI0921 11:28:44.683964 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.691023 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.691117 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.691212 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.691435 2976 log.go:181] (0x267a700) Data frame received for 5\nI0921 11:28:44.691562 2976 log.go:181] (0x267b110) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:44.691641 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.691947 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.692064 2976 log.go:181] (0x267b110) (5) Data frame sent\nI0921 11:28:44.692272 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.696702 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.696801 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.696916 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.697523 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.697609 2976 log.go:181] (0x267a700) Data frame received for 5\nI0921 11:28:44.697709 2976 log.go:181] (0x267b110) (5) Data frame handling\nI0921 11:28:44.697828 2976 log.go:181] (0x267b110) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:44.697911 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.698027 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.701148 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.701241 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 
11:28:44.701357 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.702011 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.702171 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.702296 2976 log.go:181] (0x267a700) Data frame received for 5\nI0921 11:28:44.702429 2976 log.go:181] (0x267b110) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:44.702523 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.702648 2976 log.go:181] (0x267b110) (5) Data frame sent\nI0921 11:28:44.707559 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.707671 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.707833 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.708091 2976 log.go:181] (0x267a700) Data frame received for 5\nI0921 11:28:44.708254 2976 log.go:181] (0x267b110) (5) Data frame handling\nI0921 11:28:44.708376 2976 log.go:181] (0x267b110) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:44.708466 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.708543 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.708645 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.714217 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.714358 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.714489 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.715356 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.715574 2976 log.go:181] (0x267a700) Data frame received for 5\nI0921 11:28:44.715749 2976 log.go:181] (0x267b110) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:44.715903 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.716117 2976 log.go:181] (0x2512700) (3) Data frame 
sent\nI0921 11:28:44.716391 2976 log.go:181] (0x267b110) (5) Data frame sent\nI0921 11:28:44.718698 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.718856 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.719043 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.719740 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.719895 2976 log.go:181] (0x267a700) Data frame received for 5\nI0921 11:28:44.720230 2976 log.go:181] (0x267b110) (5) Data frame handling\nI0921 11:28:44.720413 2976 log.go:181] (0x2512700) (3) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:44.720645 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.720824 2976 log.go:181] (0x267b110) (5) Data frame sent\nI0921 11:28:44.726334 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.726453 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.726599 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.727119 2976 log.go:181] (0x267a700) Data frame received for 5\nI0921 11:28:44.727287 2976 log.go:181] (0x267b110) (5) Data frame handling\nI0921 11:28:44.727450 2976 log.go:181] (0x267b110) (5) Data frame sent\nI0921 11:28:44.727544 2976 log.go:181] (0x267a700) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:44.727632 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.727724 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.732905 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.733049 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.733250 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.733420 2976 log.go:181] (0x267a700) Data frame received for 5\nI0921 11:28:44.733506 2976 log.go:181] (0x267b110) (5) Data frame handling\nI0921 11:28:44.733589 2976 log.go:181] 
(0x267b110) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:44.733712 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.733804 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.733921 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.739970 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.740052 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.740397 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.740628 2976 log.go:181] (0x267a700) Data frame received for 5\nI0921 11:28:44.740791 2976 log.go:181] (0x267b110) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:44.740901 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.741028 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.741118 2976 log.go:181] (0x267b110) (5) Data frame sent\nI0921 11:28:44.741269 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.744122 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.744294 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.744385 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.745005 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.745118 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.745228 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.745323 2976 log.go:181] (0x267a700) Data frame received for 5\nI0921 11:28:44.745418 2976 log.go:181] (0x267b110) (5) Data frame handling\nI0921 11:28:44.745539 2976 log.go:181] (0x267b110) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:44.748244 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.748312 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.748382 2976 
log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.748723 2976 log.go:181] (0x267a700) Data frame received for 5\nI0921 11:28:44.748793 2976 log.go:181] (0x267b110) (5) Data frame handling\nI0921 11:28:44.748869 2976 log.go:181] (0x267b110) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:44.749139 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.749245 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.749344 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.752299 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.752396 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.752508 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.752722 2976 log.go:181] (0x267a700) Data frame received for 5\nI0921 11:28:44.752781 2976 log.go:181] (0x267b110) (5) Data frame handling\nI0921 11:28:44.752838 2976 log.go:181] (0x267b110) (5) Data frame sent\nI0921 11:28:44.752899 2976 log.go:181] (0x267a700) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:44.752950 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.753006 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.756000 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.756084 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.756252 2976 log.go:181] (0x2512700) (3) Data frame sent\nI0921 11:28:44.756676 2976 log.go:181] (0x267a700) Data frame received for 3\nI0921 11:28:44.756868 2976 log.go:181] (0x2512700) (3) Data frame handling\nI0921 11:28:44.757102 2976 log.go:181] (0x267a700) Data frame received for 5\nI0921 11:28:44.757224 2976 log.go:181] (0x267b110) (5) Data frame handling\nI0921 11:28:44.758502 2976 log.go:181] (0x267a700) Data frame received for 1\nI0921 11:28:44.758625 2976 log.go:181] (0x267a930) (1) Data frame 
handling\nI0921 11:28:44.758811 2976 log.go:181] (0x267a930) (1) Data frame sent\nI0921 11:28:44.759864 2976 log.go:181] (0x267a700) (0x267a930) Stream removed, broadcasting: 1\nI0921 11:28:44.761154 2976 log.go:181] (0x267a700) Go away received\nI0921 11:28:44.765333 2976 log.go:181] (0x267a700) (0x267a930) Stream removed, broadcasting: 1\nI0921 11:28:44.765694 2976 log.go:181] (0x267a700) (0x2512700) Stream removed, broadcasting: 3\nI0921 11:28:44.765999 2976 log.go:181] (0x267a700) (0x267b110) Stream removed, broadcasting: 5\n" Sep 21 11:28:44.777: INFO: stdout: "\naffinity-nodeport-transition-cp8gl\naffinity-nodeport-transition-qz6vh\naffinity-nodeport-transition-cp8gl\naffinity-nodeport-transition-qz6vh\naffinity-nodeport-transition-qz6vh\naffinity-nodeport-transition-cp8gl\naffinity-nodeport-transition-cp8gl\naffinity-nodeport-transition-qz6vh\naffinity-nodeport-transition-qz6vh\naffinity-nodeport-transition-cp8gl\naffinity-nodeport-transition-cp8gl\naffinity-nodeport-transition-grw8q\naffinity-nodeport-transition-qz6vh\naffinity-nodeport-transition-cp8gl\naffinity-nodeport-transition-grw8q\naffinity-nodeport-transition-grw8q" Sep 21 11:28:44.777: INFO: Received response from host: affinity-nodeport-transition-cp8gl Sep 21 11:28:44.777: INFO: Received response from host: affinity-nodeport-transition-qz6vh Sep 21 11:28:44.777: INFO: Received response from host: affinity-nodeport-transition-cp8gl Sep 21 11:28:44.777: INFO: Received response from host: affinity-nodeport-transition-qz6vh Sep 21 11:28:44.777: INFO: Received response from host: affinity-nodeport-transition-qz6vh Sep 21 11:28:44.777: INFO: Received response from host: affinity-nodeport-transition-cp8gl Sep 21 11:28:44.777: INFO: Received response from host: affinity-nodeport-transition-cp8gl Sep 21 11:28:44.777: INFO: Received response from host: affinity-nodeport-transition-qz6vh Sep 21 11:28:44.777: INFO: Received response from host: affinity-nodeport-transition-qz6vh Sep 21 11:28:44.777: INFO: 
Received response from host: affinity-nodeport-transition-cp8gl Sep 21 11:28:44.778: INFO: Received response from host: affinity-nodeport-transition-cp8gl Sep 21 11:28:44.778: INFO: Received response from host: affinity-nodeport-transition-grw8q Sep 21 11:28:44.778: INFO: Received response from host: affinity-nodeport-transition-qz6vh Sep 21 11:28:44.778: INFO: Received response from host: affinity-nodeport-transition-cp8gl Sep 21 11:28:44.778: INFO: Received response from host: affinity-nodeport-transition-grw8q Sep 21 11:28:44.778: INFO: Received response from host: affinity-nodeport-transition-grw8q Sep 21 11:28:44.791: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7374 execpod-affinityhq8sx -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:31799/ ; done' Sep 21 11:28:46.406: INFO: stderr: "I0921 11:28:46.180969 2996 log.go:181] (0x28cc000) (0x28cc070) Create stream\nI0921 11:28:46.184947 2996 log.go:181] (0x28cc000) (0x28cc070) Stream added, broadcasting: 1\nI0921 11:28:46.196431 2996 log.go:181] (0x28cc000) Reply frame received for 1\nI0921 11:28:46.196940 2996 log.go:181] (0x28cc000) (0x2950070) Create stream\nI0921 11:28:46.197007 2996 log.go:181] (0x28cc000) (0x2950070) Stream added, broadcasting: 3\nI0921 11:28:46.198464 2996 log.go:181] (0x28cc000) Reply frame received for 3\nI0921 11:28:46.198826 2996 log.go:181] (0x28cc000) (0x2950230) Create stream\nI0921 11:28:46.198928 2996 log.go:181] (0x28cc000) (0x2950230) Stream added, broadcasting: 5\nI0921 11:28:46.200898 2996 log.go:181] (0x28cc000) Reply frame received for 5\nI0921 11:28:46.299041 2996 log.go:181] (0x28cc000) Data frame received for 5\nI0921 11:28:46.299276 2996 log.go:181] (0x2950230) (5) Data frame handling\nI0921 11:28:46.299414 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.299592 2996 log.go:181] (0x2950070) (3) Data frame 
handling\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:46.299811 2996 log.go:181] (0x2950230) (5) Data frame sent\nI0921 11:28:46.300037 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.302935 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.303048 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.303180 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.303537 2996 log.go:181] (0x28cc000) Data frame received for 5\nI0921 11:28:46.303658 2996 log.go:181] (0x2950230) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:46.303806 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.303993 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.304109 2996 log.go:181] (0x2950230) (5) Data frame sent\nI0921 11:28:46.304326 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.310260 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.310361 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.310497 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.310904 2996 log.go:181] (0x28cc000) Data frame received for 5\nI0921 11:28:46.311050 2996 log.go:181] (0x2950230) (5) Data frame handling\nI0921 11:28:46.311164 2996 log.go:181] (0x2950230) (5) Data frame sent\n+ echo\nI0921 11:28:46.311337 2996 log.go:181] (0x28cc000) Data frame received for 5\nI0921 11:28:46.311502 2996 log.go:181] (0x2950230) (5) Data frame handling\nI0921 11:28:46.311667 2996 log.go:181] (0x2950230) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:46.311777 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.311898 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.312022 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.315070 2996 log.go:181] 
(0x28cc000) Data frame received for 3\nI0921 11:28:46.315169 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.315296 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.315737 2996 log.go:181] (0x28cc000) Data frame received for 5\nI0921 11:28:46.315911 2996 log.go:181] (0x2950230) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:46.316052 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.316280 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.316436 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.316563 2996 log.go:181] (0x2950230) (5) Data frame sent\nI0921 11:28:46.321447 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.321594 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.321704 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.322282 2996 log.go:181] (0x28cc000) Data frame received for 5\nI0921 11:28:46.322425 2996 log.go:181] (0x2950230) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:46.322546 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.322674 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.322808 2996 log.go:181] (0x2950230) (5) Data frame sent\nI0921 11:28:46.322939 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.327643 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.327762 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.327878 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.328433 2996 log.go:181] (0x28cc000) Data frame received for 5\nI0921 11:28:46.328530 2996 log.go:181] (0x2950230) (5) Data frame handling\nI0921 11:28:46.328668 2996 log.go:181] (0x2950230) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:46.328781 2996 
log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.328893 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.329005 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.333980 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.334090 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.334216 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.334599 2996 log.go:181] (0x28cc000) Data frame received for 5\nI0921 11:28:46.334708 2996 log.go:181] (0x2950230) (5) Data frame handling\n+ echo\n+ curl -q -sI0921 11:28:46.334815 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.335033 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.335239 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.335417 2996 log.go:181] (0x2950230) (5) Data frame sent\nI0921 11:28:46.335552 2996 log.go:181] (0x28cc000) Data frame received for 5\nI0921 11:28:46.335653 2996 log.go:181] (0x2950230) (5) Data frame handling\nI0921 11:28:46.335777 2996 log.go:181] (0x2950230) (5) Data frame sent\n --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:46.340279 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.340469 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.340655 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.340970 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.341065 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.341178 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.341285 2996 log.go:181] (0x28cc000) Data frame received for 5\nI0921 11:28:46.341384 2996 log.go:181] (0x2950230) (5) Data frame handling\nI0921 11:28:46.341496 2996 log.go:181] (0x2950230) (5) Data frame sent\n+ echo\n+ I0921 11:28:46.341599 2996 log.go:181] (0x28cc000) Data frame received for 5\nI0921 11:28:46.341698 2996 log.go:181] (0x2950230) (5) Data 
frame handling\nI0921 11:28:46.341813 2996 log.go:181] (0x2950230) (5) Data frame sent\ncurl -q -s --connect-timeout 2I0921 11:28:46.341908 2996 log.go:181] (0x28cc000) Data frame received for 5\nI0921 11:28:46.342020 2996 log.go:181] (0x2950230) (5) Data frame handling\nI0921 11:28:46.342118 2996 log.go:181] (0x2950230) (5) Data frame sent\n http://172.18.0.11:31799/\nI0921 11:28:46.347237 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.347344 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.347459 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.347964 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.348097 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.348299 2996 log.go:181] (0x28cc000) Data frame received for 5\nI0921 11:28:46.348456 2996 log.go:181] (0x2950230) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:46.348574 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.348680 2996 log.go:181] (0x2950230) (5) Data frame sent\nI0921 11:28:46.352076 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.352306 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.352427 2996 log.go:181] (0x28cc000) Data frame received for 5\nI0921 11:28:46.352586 2996 log.go:181] (0x2950230) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:46.352734 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.352893 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.352997 2996 log.go:181] (0x2950230) (5) Data frame sent\nI0921 11:28:46.353149 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.353306 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.357420 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.357519 2996 log.go:181] (0x2950070) (3) 
Data frame handling\nI0921 11:28:46.357640 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.358294 2996 log.go:181] (0x28cc000) Data frame received for 5\nI0921 11:28:46.358431 2996 log.go:181] (0x2950230) (5) Data frame handling\nI0921 11:28:46.358553 2996 log.go:181] (0x2950230) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:46.358651 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.358746 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.358861 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.363974 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.364076 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.364266 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.364784 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.364905 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.365015 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.365112 2996 log.go:181] (0x28cc000) Data frame received for 5\nI0921 11:28:46.365197 2996 log.go:181] (0x2950230) (5) Data frame handling\nI0921 11:28:46.365308 2996 log.go:181] (0x2950230) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:46.368426 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.368557 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.368675 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.368945 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.369095 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.369192 2996 log.go:181] (0x28cc000) Data frame received for 5\nI0921 11:28:46.369331 2996 log.go:181] (0x2950230) (5) Data frame handling\nI0921 11:28:46.369452 2996 log.go:181] (0x2950230) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:46.369579 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.373855 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.374012 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.374148 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.374273 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.374378 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.374509 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.374618 2996 log.go:181] (0x28cc000) Data frame received for 5\nI0921 11:28:46.374721 2996 log.go:181] (0x2950230) (5) Data frame handling\nI0921 11:28:46.374842 2996 log.go:181] (0x2950230) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:46.379784 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.379881 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.379991 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.380953 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.381126 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.381254 2996 log.go:181] (0x28cc000) Data frame received for 5\nI0921 11:28:46.381501 2996 log.go:181] (0x2950230) (5) Data frame handling\nI0921 11:28:46.381634 2996 log.go:181] (0x2950230) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:46.381787 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.383761 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.383848 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.383947 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.384395 2996 log.go:181] (0x28cc000) Data frame received for 5\nI0921 11:28:46.384571 2996 log.go:181] (0x2950230) (5) Data frame handling\nI0921 
11:28:46.384709 2996 log.go:181] (0x2950230) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31799/\nI0921 11:28:46.384859 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.384987 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.385107 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.390331 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.390489 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.390678 2996 log.go:181] (0x2950070) (3) Data frame sent\nI0921 11:28:46.391471 2996 log.go:181] (0x28cc000) Data frame received for 5\nI0921 11:28:46.391627 2996 log.go:181] (0x2950230) (5) Data frame handling\nI0921 11:28:46.391854 2996 log.go:181] (0x28cc000) Data frame received for 3\nI0921 11:28:46.391960 2996 log.go:181] (0x2950070) (3) Data frame handling\nI0921 11:28:46.393856 2996 log.go:181] (0x28cc000) Data frame received for 1\nI0921 11:28:46.393935 2996 log.go:181] (0x28cc070) (1) Data frame handling\nI0921 11:28:46.394012 2996 log.go:181] (0x28cc070) (1) Data frame sent\nI0921 11:28:46.394352 2996 log.go:181] (0x28cc000) (0x28cc070) Stream removed, broadcasting: 1\nI0921 11:28:46.396109 2996 log.go:181] (0x28cc000) Go away received\nI0921 11:28:46.398102 2996 log.go:181] (0x28cc000) (0x28cc070) Stream removed, broadcasting: 1\nI0921 11:28:46.398471 2996 log.go:181] (0x28cc000) (0x2950070) Stream removed, broadcasting: 3\nI0921 11:28:46.398651 2996 log.go:181] (0x28cc000) (0x2950230) Stream removed, broadcasting: 5\n" Sep 21 11:28:46.411: INFO: stdout: 
"\naffinity-nodeport-transition-grw8q\naffinity-nodeport-transition-grw8q\naffinity-nodeport-transition-grw8q\naffinity-nodeport-transition-grw8q\naffinity-nodeport-transition-grw8q\naffinity-nodeport-transition-grw8q\naffinity-nodeport-transition-grw8q\naffinity-nodeport-transition-grw8q\naffinity-nodeport-transition-grw8q\naffinity-nodeport-transition-grw8q\naffinity-nodeport-transition-grw8q\naffinity-nodeport-transition-grw8q\naffinity-nodeport-transition-grw8q\naffinity-nodeport-transition-grw8q\naffinity-nodeport-transition-grw8q\naffinity-nodeport-transition-grw8q" Sep 21 11:28:46.412: INFO: Received response from host: affinity-nodeport-transition-grw8q Sep 21 11:28:46.412: INFO: Received response from host: affinity-nodeport-transition-grw8q Sep 21 11:28:46.412: INFO: Received response from host: affinity-nodeport-transition-grw8q Sep 21 11:28:46.412: INFO: Received response from host: affinity-nodeport-transition-grw8q Sep 21 11:28:46.412: INFO: Received response from host: affinity-nodeport-transition-grw8q Sep 21 11:28:46.412: INFO: Received response from host: affinity-nodeport-transition-grw8q Sep 21 11:28:46.412: INFO: Received response from host: affinity-nodeport-transition-grw8q Sep 21 11:28:46.412: INFO: Received response from host: affinity-nodeport-transition-grw8q Sep 21 11:28:46.412: INFO: Received response from host: affinity-nodeport-transition-grw8q Sep 21 11:28:46.412: INFO: Received response from host: affinity-nodeport-transition-grw8q Sep 21 11:28:46.412: INFO: Received response from host: affinity-nodeport-transition-grw8q Sep 21 11:28:46.412: INFO: Received response from host: affinity-nodeport-transition-grw8q Sep 21 11:28:46.412: INFO: Received response from host: affinity-nodeport-transition-grw8q Sep 21 11:28:46.412: INFO: Received response from host: affinity-nodeport-transition-grw8q Sep 21 11:28:46.412: INFO: Received response from host: affinity-nodeport-transition-grw8q Sep 21 11:28:46.412: INFO: Received response from host: 
affinity-nodeport-transition-grw8q Sep 21 11:28:46.412: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-7374, will wait for the garbage collector to delete the pods Sep 21 11:28:46.497: INFO: Deleting ReplicationController affinity-nodeport-transition took: 9.160982ms Sep 21 11:28:46.997: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 500.780148ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:28:53.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7374" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:29.775 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":200,"skipped":3301,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] 
[sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:28:53.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5874 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Sep 21 11:28:53.548: INFO: Found 0 stateful pods, waiting for 3 Sep 21 11:29:03.665: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 21 11:29:03.666: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 21 11:29:03.666: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Sep 21 11:29:13.560: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 21 11:29:13.560: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 21 11:29:13.561: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to 
docker.io/library/httpd:2.4.39-alpine Sep 21 11:29:13.608: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Sep 21 11:29:23.685: INFO: Updating stateful set ss2 Sep 21 11:29:23.716: INFO: Waiting for Pod statefulset-5874/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Sep 21 11:29:33.896: INFO: Found 2 stateful pods, waiting for 3 Sep 21 11:29:43.911: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 21 11:29:43.911: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 21 11:29:43.911: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Sep 21 11:29:43.949: INFO: Updating stateful set ss2 Sep 21 11:29:43.977: INFO: Waiting for Pod statefulset-5874/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 21 11:29:54.037: INFO: Updating stateful set ss2 Sep 21 11:29:54.155: INFO: Waiting for StatefulSet statefulset-5874/ss2 to complete update Sep 21 11:29:54.155: INFO: Waiting for Pod statefulset-5874/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 21 11:30:04.173: INFO: Waiting for StatefulSet statefulset-5874/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 21 11:30:14.316: INFO: Deleting all statefulset in ns statefulset-5874 Sep 21 11:30:14.325: INFO: Scaling statefulset ss2 to 0 Sep 21 11:30:44.360: INFO: Waiting for statefulset status.replicas updated to 0 Sep 21 11:30:44.366: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:30:44.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5874" for this suite. • [SLOW TEST:110.997 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":303,"completed":201,"skipped":3308,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:30:44.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected 
downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 21 11:30:44.502: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d63c9dd1-47a7-4107-91c8-52d8e7a94efc" in namespace "projected-3507" to be "Succeeded or Failed" Sep 21 11:30:44.531: INFO: Pod "downwardapi-volume-d63c9dd1-47a7-4107-91c8-52d8e7a94efc": Phase="Pending", Reason="", readiness=false. Elapsed: 29.003446ms Sep 21 11:30:46.539: INFO: Pod "downwardapi-volume-d63c9dd1-47a7-4107-91c8-52d8e7a94efc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036144491s Sep 21 11:30:48.546: INFO: Pod "downwardapi-volume-d63c9dd1-47a7-4107-91c8-52d8e7a94efc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043784152s STEP: Saw pod success Sep 21 11:30:48.547: INFO: Pod "downwardapi-volume-d63c9dd1-47a7-4107-91c8-52d8e7a94efc" satisfied condition "Succeeded or Failed" Sep 21 11:30:48.551: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-d63c9dd1-47a7-4107-91c8-52d8e7a94efc container client-container: STEP: delete the pod Sep 21 11:30:48.602: INFO: Waiting for pod downwardapi-volume-d63c9dd1-47a7-4107-91c8-52d8e7a94efc to disappear Sep 21 11:30:48.613: INFO: Pod downwardapi-volume-d63c9dd1-47a7-4107-91c8-52d8e7a94efc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:30:48.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3507" for this suite. 
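The downward API test above asserts that files in the projected volume carry the volume's DefaultMode. As a minimal local sketch of just the permission check (a plain temp file stands in for the projected file, and `0644` is an assumed illustrative mode, not read from this log; GNU `stat` on Linux):

```shell
# Sketch only, not the e2e test: apply a default mode to a file and
# read the octal permissions back the way a mode assertion would.
f=$(mktemp)
chmod 0644 "$f"
stat -c '%a' "$f"   # prints 644
rm -f "$f"
```

The real test creates a pod whose container prints the mode of the projected downward API file and then greps the pod log for the expected value.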
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":202,"skipped":3318,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:30:48.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 21 11:31:00.429: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 21 11:31:02.449: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736284660, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736284660, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736284660, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736284660, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 21 11:31:05.487: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:31:06.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8232" for this suite.
STEP: Destroying namespace "webhook-8232-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:17.602 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":303,"completed":203,"skipped":3344,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:31:06.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 21 11:31:06.446: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Sep 21 11:31:06.464: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:06.499: INFO: Number of nodes with available pods: 0
Sep 21 11:31:06.499: INFO: Node kali-worker is running more than one daemon pod
Sep 21 11:31:07.511: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:07.518: INFO: Number of nodes with available pods: 0
Sep 21 11:31:07.518: INFO: Node kali-worker is running more than one daemon pod
Sep 21 11:31:08.510: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:08.517: INFO: Number of nodes with available pods: 0
Sep 21 11:31:08.517: INFO: Node kali-worker is running more than one daemon pod
Sep 21 11:31:09.603: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:09.609: INFO: Number of nodes with available pods: 0
Sep 21 11:31:09.609: INFO: Node kali-worker is running more than one daemon pod
Sep 21 11:31:10.512: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:10.519: INFO: Number of nodes with available pods: 2
Sep 21 11:31:10.519: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Sep 21 11:31:10.589: INFO: Wrong image for pod: daemon-set-rslbk. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:10.589: INFO: Wrong image for pod: daemon-set-sxk9q. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:10.625: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:11.634: INFO: Wrong image for pod: daemon-set-rslbk. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:11.634: INFO: Wrong image for pod: daemon-set-sxk9q. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:11.644: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:12.633: INFO: Wrong image for pod: daemon-set-rslbk. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:12.633: INFO: Wrong image for pod: daemon-set-sxk9q. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:12.642: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:13.634: INFO: Wrong image for pod: daemon-set-rslbk. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:13.634: INFO: Wrong image for pod: daemon-set-sxk9q. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:13.634: INFO: Pod daemon-set-sxk9q is not available
Sep 21 11:31:13.642: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:14.634: INFO: Wrong image for pod: daemon-set-rslbk. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:14.634: INFO: Wrong image for pod: daemon-set-sxk9q. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:14.634: INFO: Pod daemon-set-sxk9q is not available
Sep 21 11:31:14.642: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:15.634: INFO: Wrong image for pod: daemon-set-rslbk. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:15.634: INFO: Wrong image for pod: daemon-set-sxk9q. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:15.635: INFO: Pod daemon-set-sxk9q is not available
Sep 21 11:31:15.644: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:16.635: INFO: Wrong image for pod: daemon-set-rslbk. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:16.636: INFO: Wrong image for pod: daemon-set-sxk9q. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:16.636: INFO: Pod daemon-set-sxk9q is not available
Sep 21 11:31:16.647: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:17.633: INFO: Wrong image for pod: daemon-set-rslbk. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:17.633: INFO: Wrong image for pod: daemon-set-sxk9q. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:17.633: INFO: Pod daemon-set-sxk9q is not available
Sep 21 11:31:17.690: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:18.634: INFO: Wrong image for pod: daemon-set-rslbk. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:18.635: INFO: Wrong image for pod: daemon-set-sxk9q. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:18.635: INFO: Pod daemon-set-sxk9q is not available
Sep 21 11:31:18.646: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:19.636: INFO: Wrong image for pod: daemon-set-rslbk. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:19.636: INFO: Wrong image for pod: daemon-set-sxk9q. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:19.636: INFO: Pod daemon-set-sxk9q is not available
Sep 21 11:31:19.645: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:20.633: INFO: Wrong image for pod: daemon-set-rslbk. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:20.633: INFO: Wrong image for pod: daemon-set-sxk9q. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:20.633: INFO: Pod daemon-set-sxk9q is not available
Sep 21 11:31:20.645: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:21.635: INFO: Wrong image for pod: daemon-set-rslbk. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:21.635: INFO: Wrong image for pod: daemon-set-sxk9q. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:21.635: INFO: Pod daemon-set-sxk9q is not available
Sep 21 11:31:21.645: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:22.635: INFO: Wrong image for pod: daemon-set-rslbk. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:22.635: INFO: Wrong image for pod: daemon-set-sxk9q. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:22.635: INFO: Pod daemon-set-sxk9q is not available
Sep 21 11:31:22.646: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:23.635: INFO: Pod daemon-set-845fc is not available
Sep 21 11:31:23.635: INFO: Wrong image for pod: daemon-set-rslbk. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:23.645: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:24.642: INFO: Pod daemon-set-845fc is not available
Sep 21 11:31:24.643: INFO: Wrong image for pod: daemon-set-rslbk. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:24.673: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:25.634: INFO: Pod daemon-set-845fc is not available
Sep 21 11:31:25.634: INFO: Wrong image for pod: daemon-set-rslbk. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:25.644: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:26.735: INFO: Wrong image for pod: daemon-set-rslbk. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:26.805: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:27.633: INFO: Wrong image for pod: daemon-set-rslbk. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:27.643: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:28.635: INFO: Wrong image for pod: daemon-set-rslbk. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:28.635: INFO: Pod daemon-set-rslbk is not available
Sep 21 11:31:28.647: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:29.634: INFO: Wrong image for pod: daemon-set-rslbk. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:29.634: INFO: Pod daemon-set-rslbk is not available
Sep 21 11:31:29.643: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:30.634: INFO: Wrong image for pod: daemon-set-rslbk. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:30.635: INFO: Pod daemon-set-rslbk is not available
Sep 21 11:31:30.645: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:31.636: INFO: Wrong image for pod: daemon-set-rslbk. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:31.636: INFO: Pod daemon-set-rslbk is not available
Sep 21 11:31:31.646: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:32.634: INFO: Wrong image for pod: daemon-set-rslbk. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 21 11:31:32.634: INFO: Pod daemon-set-rslbk is not available
Sep 21 11:31:32.650: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:33.635: INFO: Pod daemon-set-rmwgf is not available
Sep 21 11:31:33.645: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Sep 21 11:31:33.656: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:33.663: INFO: Number of nodes with available pods: 1
Sep 21 11:31:33.663: INFO: Node kali-worker2 is running more than one daemon pod
Sep 21 11:31:34.675: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:34.761: INFO: Number of nodes with available pods: 1
Sep 21 11:31:34.762: INFO: Node kali-worker2 is running more than one daemon pod
Sep 21 11:31:35.686: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:35.693: INFO: Number of nodes with available pods: 1
Sep 21 11:31:35.693: INFO: Node kali-worker2 is running more than one daemon pod
Sep 21 11:31:36.676: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 21 11:31:36.694: INFO: Number of nodes with available pods: 2
Sep 21 11:31:36.694: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1132, will wait for the garbage collector to delete the pods
Sep 21 11:31:36.790: INFO: Deleting DaemonSet.extensions daemon-set took: 8.518286ms
Sep 21 11:31:37.291: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.906474ms
Sep 21 11:31:43.297: INFO: Number of nodes with available pods: 0
Sep 21 11:31:43.297: INFO: Number of running nodes: 0, number of available pods: 0
Sep 21 11:31:43.303: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1132/daemonsets","resourceVersion":"2069802"},"items":null}
Sep 21 11:31:43.307: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1132/pods","resourceVersion":"2069802"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:31:43.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1132" for this suite.
• [SLOW TEST:37.127 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":303,"completed":204,"skipped":3364,"failed":0}
SS
------------------------------
[sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:31:43.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name secret-emptykey-test-c4e7d24d-c557-4cd4-a9a9-1c026f341161
[AfterEach] [sig-api-machinery] Secrets
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:31:43.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4462" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":303,"completed":205,"skipped":3366,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:31:43.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:31:47.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-794" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":303,"completed":206,"skipped":3378,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:31:47.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 21 11:31:47.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
Sep 21 11:31:48.436: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-21T11:31:48Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-21T11:31:48Z]] name:name1 resourceVersion:2069846 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6f652773-ae84-41f6-900b-36599e0455e5] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Sep 21 11:31:58.448: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-21T11:31:58Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-21T11:31:58Z]] name:name2 resourceVersion:2069920 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:b52f756c-f4a8-46d4-89a2-3986ebd496f0] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Sep 21 11:32:08.463: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-21T11:31:48Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-21T11:32:08Z]] name:name1 resourceVersion:2069950 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6f652773-ae84-41f6-900b-36599e0455e5] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Sep 21 11:32:18.477: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-21T11:31:58Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-21T11:32:18Z]] name:name2 resourceVersion:2069980 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:b52f756c-f4a8-46d4-89a2-3986ebd496f0] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Sep 21 11:32:28.488: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-21T11:31:48Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-21T11:32:08Z]] name:name1 resourceVersion:2070013 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6f652773-ae84-41f6-900b-36599e0455e5] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Sep 21 11:32:38.502: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-21T11:31:58Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-21T11:32:18Z]] name:name2 resourceVersion:2070045 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:b52f756c-f4a8-46d4-89a2-3986ebd496f0] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 11:32:49.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-8303" for this suite.
• [SLOW TEST:61.299 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":303,"completed":207,"skipped":3403,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 11:32:49.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-6077
STEP: creating service affinity-nodeport in namespace services-6077
STEP: creating replication controller affinity-nodeport in namespace services-6077
I0921 11:32:49.246646 10 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-6077, replica count: 3
I0921 11:32:52.298297 10 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0921 11:32:55.299167 10 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0921 11:32:58.300028 10 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Sep 21 11:32:58.327: INFO: Creating new exec pod
Sep 21 11:33:03.383: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-6077 execpod-affinityp5lzb -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80'
Sep 21 11:33:04.865: INFO: stderr: "I0921 11:33:04.736043 3016 log.go:181] (0x2c983f0) (0x2c985b0) Create stream\nI0921 11:33:04.738999 3016 log.go:181] (0x2c983f0) (0x2c985b0) Stream added, broadcasting: 1\nI0921 11:33:04.750112 3016 log.go:181] (0x2c983f0) Reply frame received for 1\nI0921 11:33:04.751162 3016 log.go:181] (0x2c983f0) (0x2c98770) Create stream\nI0921 11:33:04.751299 3016 log.go:181] (0x2c983f0) (0x2c98770) Stream added, broadcasting: 3\nI0921 11:33:04.753396 3016 log.go:181] (0x2c983f0) Reply frame received for 3\nI0921 11:33:04.753577 3016 log.go:181] (0x2c983f0) (0x26d45b0) Create stream\nI0921 11:33:04.753628 3016 log.go:181] (0x2c983f0) (0x26d45b0) Stream added, broadcasting: 5\nI0921 11:33:04.754848 3016 log.go:181] (0x2c983f0) Reply frame received for 5\nI0921 11:33:04.846789 3016 log.go:181] (0x2c983f0) Data frame received for 3\nI0921 11:33:04.847342 3016 log.go:181] (0x2c983f0) Data frame received for 5\nI0921 11:33:04.847519 3016 log.go:181] (0x2c98770) (3) Data frame handling\nI0921 11:33:04.847776 3016 log.go:181] (0x2c983f0) Data frame received for 1\nI0921 11:33:04.847947 3016 log.go:181] (0x2c985b0) (1) Data frame handling\nI0921 11:33:04.848078 3016 log.go:181] (0x26d45b0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0921 11:33:04.851243 3016 log.go:181] (0x26d45b0) (5) Data frame sent\nI0921 11:33:04.851531 3016 log.go:181] (0x2c983f0) Data frame received for 5\nI0921 11:33:04.851721 3016 log.go:181] (0x26d45b0) (5) Data frame handling\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0921 11:33:04.851981 3016 log.go:181] (0x2c985b0) (1) Data frame sent\nI0921 11:33:04.852368 3016 log.go:181] (0x26d45b0) (5) Data frame sent\nI0921 11:33:04.852498 3016 log.go:181] (0x2c983f0) Data frame received for 5\nI0921 11:33:04.853740 3016 log.go:181] (0x2c983f0) (0x2c985b0) Stream removed, broadcasting: 1\nI0921 11:33:04.854832 3016 log.go:181] (0x26d45b0) (5) Data frame handling\nI0921 11:33:04.855182 3016 log.go:181] (0x2c983f0) Go away received\nI0921 11:33:04.857510 3016 log.go:181] (0x2c983f0) (0x2c985b0) Stream removed, broadcasting: 1\nI0921 11:33:04.857897 3016 log.go:181] (0x2c983f0) (0x2c98770) Stream removed, broadcasting: 3\nI0921 11:33:04.858034 3016 log.go:181] (0x2c983f0) (0x26d45b0) Stream removed, broadcasting: 5\n"
Sep 21 11:33:04.866: INFO: stdout: ""
Sep 21 11:33:04.871: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-6077 execpod-affinityp5lzb -- /bin/sh -x -c nc -zv -t -w 2 10.97.48.252 80'
Sep 21 11:33:06.452: INFO: stderr: "I0921 11:33:06.317167 3037 log.go:181] (0x2b51ce0) (0x2b51d50) Create stream\nI0921 11:33:06.318952 3037 log.go:181] (0x2b51ce0) (0x2b51d50) Stream added, broadcasting: 1\nI0921 11:33:06.328816 3037 log.go:181] (0x2b51ce0) Reply frame received for 1\nI0921 11:33:06.329694 3037 log.go:181] (0x2b51ce0) (0x279c2a0) Create stream\nI0921 11:33:06.329810 3037 log.go:181] (0x2b51ce0) (0x279c2a0) Stream added, broadcasting: 3\nI0921 11:33:06.331749 3037 log.go:181] (0x2b51ce0) Reply frame received for 3\nI0921 11:33:06.332086 3037 log.go:181] (0x2b51ce0) (0x26124d0) Create stream\nI0921 11:33:06.332216 3037 log.go:181] (0x2b51ce0) (0x26124d0) Stream added, broadcasting: 5\nI0921 11:33:06.333511 3037 log.go:181] (0x2b51ce0) Reply frame received for 5\nI0921 11:33:06.419479 3037 log.go:181] (0x2b51ce0) Data frame received for 3\nI0921 11:33:06.419886 3037 log.go:181] (0x2b51ce0) Data frame received for 5\nI0921 11:33:06.420079 3037 log.go:181] (0x2b51ce0) Data frame received for 1\nI0921 11:33:06.420239 3037 log.go:181] (0x2b51d50) (1) Data frame handling\nI0921 11:33:06.420335 3037 log.go:181] (0x279c2a0) (3) Data frame handling\nI0921 11:33:06.420684 3037 log.go:181] (0x26124d0) (5) Data frame handling\nI0921 11:33:06.425215 3037 log.go:181] (0x2b51d50) (1) Data frame sent\nI0921 11:33:06.426703 3037 log.go:181] (0x2b51ce0) (0x2b51d50) Stream removed, broadcasting: 1\nI0921 11:33:06.428603 3037 log.go:181] (0x26124d0) (5) Data frame sent\nI0921 11:33:06.429231 3037 log.go:181] (0x2b51ce0) Data frame received for 5\nI0921 11:33:06.429571 3037 log.go:181] (0x26124d0) (5) Data frame handling\n+ nc -zv -t -w 2 10.97.48.252 80\nConnection to 10.97.48.252 80 port [tcp/http] succeeded!\nI0921 11:33:06.439733 3037 log.go:181] (0x2b51ce0) (0x2b51d50) Stream removed, broadcasting: 1\nI0921 11:33:06.440095 3037 log.go:181] (0x2b51ce0) (0x279c2a0) Stream removed, broadcasting: 3\nI0921 11:33:06.441834 3037 log.go:181] (0x2b51ce0) Go away received\nI0921
11:33:06.443822 3037 log.go:181] (0x2b51ce0) (0x26124d0) Stream removed, broadcasting: 5\n" Sep 21 11:33:06.452: INFO: stdout: "" Sep 21 11:33:06.453: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-6077 execpod-affinityp5lzb -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 31475' Sep 21 11:33:07.990: INFO: stderr: "I0921 11:33:07.880109 3057 log.go:181] (0x2a520e0) (0x2a52150) Create stream\nI0921 11:33:07.883303 3057 log.go:181] (0x2a520e0) (0x2a52150) Stream added, broadcasting: 1\nI0921 11:33:07.895373 3057 log.go:181] (0x2a520e0) Reply frame received for 1\nI0921 11:33:07.896470 3057 log.go:181] (0x2a520e0) (0x2b55f10) Create stream\nI0921 11:33:07.896615 3057 log.go:181] (0x2a520e0) (0x2b55f10) Stream added, broadcasting: 3\nI0921 11:33:07.898417 3057 log.go:181] (0x2a520e0) Reply frame received for 3\nI0921 11:33:07.898932 3057 log.go:181] (0x2a520e0) (0x286e460) Create stream\nI0921 11:33:07.899027 3057 log.go:181] (0x2a520e0) (0x286e460) Stream added, broadcasting: 5\nI0921 11:33:07.900733 3057 log.go:181] (0x2a520e0) Reply frame received for 5\nI0921 11:33:07.972385 3057 log.go:181] (0x2a520e0) Data frame received for 5\nI0921 11:33:07.972601 3057 log.go:181] (0x286e460) (5) Data frame handling\nI0921 11:33:07.972842 3057 log.go:181] (0x2a520e0) Data frame received for 1\nI0921 11:33:07.972963 3057 log.go:181] (0x2a52150) (1) Data frame handling\nI0921 11:33:07.973166 3057 log.go:181] (0x2a520e0) Data frame received for 3\nI0921 11:33:07.973365 3057 log.go:181] (0x2b55f10) (3) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 31475\nConnection to 172.18.0.11 31475 port [tcp/31475] succeeded!\nI0921 11:33:07.973706 3057 log.go:181] (0x2a52150) (1) Data frame sent\nI0921 11:33:07.974422 3057 log.go:181] (0x286e460) (5) Data frame sent\nI0921 11:33:07.974871 3057 log.go:181] (0x2a520e0) Data frame received for 5\nI0921 11:33:07.974945 3057 log.go:181] (0x286e460) (5) Data frame 
handling\nI0921 11:33:07.976620 3057 log.go:181] (0x2a520e0) (0x2a52150) Stream removed, broadcasting: 1\nI0921 11:33:07.979476 3057 log.go:181] (0x2a520e0) Go away received\nI0921 11:33:07.981486 3057 log.go:181] (0x2a520e0) (0x2a52150) Stream removed, broadcasting: 1\nI0921 11:33:07.981656 3057 log.go:181] (0x2a520e0) (0x2b55f10) Stream removed, broadcasting: 3\nI0921 11:33:07.981789 3057 log.go:181] (0x2a520e0) (0x286e460) Stream removed, broadcasting: 5\n" Sep 21 11:33:07.991: INFO: stdout: "" Sep 21 11:33:07.992: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-6077 execpod-affinityp5lzb -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 31475' Sep 21 11:33:09.513: INFO: stderr: "I0921 11:33:09.368582 3078 log.go:181] (0x308f810) (0x308f880) Create stream\nI0921 11:33:09.370427 3078 log.go:181] (0x308f810) (0x308f880) Stream added, broadcasting: 1\nI0921 11:33:09.402169 3078 log.go:181] (0x308f810) Reply frame received for 1\nI0921 11:33:09.402686 3078 log.go:181] (0x308f810) (0x29844d0) Create stream\nI0921 11:33:09.402761 3078 log.go:181] (0x308f810) (0x29844d0) Stream added, broadcasting: 3\nI0921 11:33:09.404426 3078 log.go:181] (0x308f810) Reply frame received for 3\nI0921 11:33:09.404758 3078 log.go:181] (0x308f810) (0x2506ee0) Create stream\nI0921 11:33:09.404860 3078 log.go:181] (0x308f810) (0x2506ee0) Stream added, broadcasting: 5\nI0921 11:33:09.406231 3078 log.go:181] (0x308f810) Reply frame received for 5\nI0921 11:33:09.494861 3078 log.go:181] (0x308f810) Data frame received for 5\nI0921 11:33:09.495207 3078 log.go:181] (0x308f810) Data frame received for 3\nI0921 11:33:09.495337 3078 log.go:181] (0x308f810) Data frame received for 1\nI0921 11:33:09.495455 3078 log.go:181] (0x308f880) (1) Data frame handling\nI0921 11:33:09.495560 3078 log.go:181] (0x29844d0) (3) Data frame handling\nI0921 11:33:09.495824 3078 log.go:181] (0x2506ee0) (5) Data frame handling\nI0921 
11:33:09.497118 3078 log.go:181] (0x308f880) (1) Data frame sent\nI0921 11:33:09.497467 3078 log.go:181] (0x2506ee0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.12 31475\nConnection to 172.18.0.12 31475 port [tcp/31475] succeeded!\nI0921 11:33:09.497828 3078 log.go:181] (0x308f810) Data frame received for 5\nI0921 11:33:09.497979 3078 log.go:181] (0x2506ee0) (5) Data frame handling\nI0921 11:33:09.499221 3078 log.go:181] (0x308f810) (0x308f880) Stream removed, broadcasting: 1\nI0921 11:33:09.500973 3078 log.go:181] (0x308f810) Go away received\nI0921 11:33:09.504523 3078 log.go:181] (0x308f810) (0x308f880) Stream removed, broadcasting: 1\nI0921 11:33:09.504715 3078 log.go:181] (0x308f810) (0x29844d0) Stream removed, broadcasting: 3\nI0921 11:33:09.504864 3078 log.go:181] (0x308f810) (0x2506ee0) Stream removed, broadcasting: 5\n" Sep 21 11:33:09.514: INFO: stdout: "" Sep 21 11:33:09.514: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-6077 execpod-affinityp5lzb -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:31475/ ; done' Sep 21 11:33:11.074: INFO: stderr: "I0921 11:33:10.877087 3098 log.go:181] (0x2a54000) (0x2a54070) Create stream\nI0921 11:33:10.883890 3098 log.go:181] (0x2a54000) (0x2a54070) Stream added, broadcasting: 1\nI0921 11:33:10.893199 3098 log.go:181] (0x2a54000) Reply frame received for 1\nI0921 11:33:10.893700 3098 log.go:181] (0x2a54000) (0x2b31d50) Create stream\nI0921 11:33:10.893768 3098 log.go:181] (0x2a54000) (0x2b31d50) Stream added, broadcasting: 3\nI0921 11:33:10.895140 3098 log.go:181] (0x2a54000) Reply frame received for 3\nI0921 11:33:10.895418 3098 log.go:181] (0x2a54000) (0x2a54230) Create stream\nI0921 11:33:10.895498 3098 log.go:181] (0x2a54000) (0x2a54230) Stream added, broadcasting: 5\nI0921 11:33:10.896915 3098 log.go:181] (0x2a54000) Reply frame received for 5\nI0921 11:33:10.946997 3098 
log.go:181] (0x2a54000) Data frame received for 5\nI0921 11:33:10.947307 3098 log.go:181] (0x2a54230) (5) Data frame handling\nI0921 11:33:10.947429 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:10.947586 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:10.947666 3098 log.go:181] (0x2a54230) (5) Data frame sent\nI0921 11:33:10.947744 3098 log.go:181] (0x2b31d50) (3) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31475/\nI0921 11:33:10.954119 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:10.954296 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:10.954457 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:10.954659 3098 log.go:181] (0x2a54000) Data frame received for 5\nI0921 11:33:10.954743 3098 log.go:181] (0x2a54230) (5) Data frame handling\nI0921 11:33:10.954835 3098 log.go:181] (0x2a54230) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31475/\nI0921 11:33:10.954910 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:10.954980 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:10.955068 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:10.962313 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:10.962442 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:10.962570 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:10.962884 3098 log.go:181] (0x2a54000) Data frame received for 5\nI0921 11:33:10.962980 3098 log.go:181] (0x2a54230) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31475/\nI0921 11:33:10.963142 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:10.963349 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:10.963485 3098 log.go:181] (0x2a54230) (5) Data frame sent\nI0921 11:33:10.963619 3098 log.go:181] (0x2b31d50) (3) Data frame 
sent\nI0921 11:33:10.970277 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:10.970423 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:10.970568 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:10.970984 3098 log.go:181] (0x2a54000) Data frame received for 5\nI0921 11:33:10.971171 3098 log.go:181] (0x2a54230) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31475/\nI0921 11:33:10.971360 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:10.971524 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:10.971695 3098 log.go:181] (0x2a54230) (5) Data frame sent\nI0921 11:33:10.971833 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:10.977842 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:10.978003 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:10.978138 3098 log.go:181] (0x2a54000) Data frame received for 5\nI0921 11:33:10.978260 3098 log.go:181] (0x2a54230) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31475/\nI0921 11:33:10.978373 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:10.978599 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:10.978940 3098 log.go:181] (0x2a54230) (5) Data frame sent\nI0921 11:33:10.979286 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:10.979551 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:10.988587 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:10.988785 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:10.988905 3098 log.go:181] (0x2a54000) Data frame received for 5\nI0921 11:33:10.989060 3098 log.go:181] (0x2a54230) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31475/\nI0921 11:33:10.989195 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:10.989345 3098 log.go:181] (0x2a54000) 
Data frame received for 3\nI0921 11:33:10.989493 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:10.989634 3098 log.go:181] (0x2a54230) (5) Data frame sent\nI0921 11:33:10.989828 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:10.992228 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:10.992335 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:10.992432 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:10.992796 3098 log.go:181] (0x2a54000) Data frame received for 5\nI0921 11:33:10.992869 3098 log.go:181] (0x2a54230) (5) Data frame handling\nI0921 11:33:10.992965 3098 log.go:181] (0x2a54230) (5) Data frame sent\n+ echo\n+ curl -qI0921 11:33:10.993288 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:10.993385 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:10.993544 3098 log.go:181] (0x2a54000) Data frame received for 5\nI0921 11:33:10.993783 3098 log.go:181] (0x2a54230) (5) Data frame handling\nI0921 11:33:10.993977 3098 log.go:181] (0x2b31d50) (3) Data frame sent\n -s --connect-timeout 2 http://172.18.0.11:31475/\nI0921 11:33:10.994098 3098 log.go:181] (0x2a54230) (5) Data frame sent\nI0921 11:33:10.997625 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:10.997708 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:10.997802 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:10.998164 3098 log.go:181] (0x2a54000) Data frame received for 5\nI0921 11:33:10.998275 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:10.998354 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:10.998454 3098 log.go:181] (0x2a54230) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31475/\nI0921 11:33:10.998621 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:10.998743 3098 log.go:181] (0x2a54230) (5) Data frame sent\nI0921 11:33:11.005158 3098 
log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:11.005299 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:11.005478 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:11.005890 3098 log.go:181] (0x2a54000) Data frame received for 5\nI0921 11:33:11.005984 3098 log.go:181] (0x2a54230) (5) Data frame handling\nI0921 11:33:11.006093 3098 log.go:181] (0x2a54230) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31475/\nI0921 11:33:11.006200 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:11.006283 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:11.006409 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:11.010921 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:11.011027 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:11.011142 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:11.011810 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:11.011982 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:11.012111 3098 log.go:181] (0x2a54000) Data frame received for 5\nI0921 11:33:11.012286 3098 log.go:181] (0x2a54230) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31475/\nI0921 11:33:11.012392 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:11.012551 3098 log.go:181] (0x2a54230) (5) Data frame sent\nI0921 11:33:11.015458 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:11.015544 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:11.015640 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:11.016314 3098 log.go:181] (0x2a54000) Data frame received for 5\nI0921 11:33:11.016469 3098 log.go:181] (0x2a54230) (5) Data frame handling\nI0921 11:33:11.016571 3098 log.go:181] (0x2a54230) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31475/\nI0921 
11:33:11.016690 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:11.016777 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:11.016884 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:11.022982 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:11.023102 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:11.023252 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:11.023552 3098 log.go:181] (0x2a54000) Data frame received for 5\nI0921 11:33:11.023731 3098 log.go:181] (0x2a54230) (5) Data frame handling\nI0921 11:33:11.023885 3098 log.go:181] (0x2a54230) (5) Data frame sent\n+ echo\nI0921 11:33:11.024013 3098 log.go:181] (0x2a54000) Data frame received for 5\nI0921 11:33:11.024204 3098 log.go:181] (0x2a54230) (5) Data frame handling\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31475/\nI0921 11:33:11.024339 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:11.024518 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:11.024720 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:11.024875 3098 log.go:181] (0x2a54230) (5) Data frame sent\nI0921 11:33:11.030196 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:11.030372 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:11.030655 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:11.030762 3098 log.go:181] (0x2a54000) Data frame received for 5\nI0921 11:33:11.030885 3098 log.go:181] (0x2a54230) (5) Data frame handling\nI0921 11:33:11.030980 3098 log.go:181] (0x2a54230) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0921 11:33:11.031116 3098 log.go:181] (0x2a54000) Data frame received for 5\nI0921 11:33:11.031259 3098 log.go:181] (0x2a54230) (5) Data frame handling\n 2 http://172.18.0.11:31475/\nI0921 11:33:11.031409 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:11.031609 3098 log.go:181] (0x2b31d50) (3) 
Data frame handling\nI0921 11:33:11.031734 3098 log.go:181] (0x2a54230) (5) Data frame sent\nI0921 11:33:11.031902 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:11.036519 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:11.036633 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:11.036817 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:11.037499 3098 log.go:181] (0x2a54000) Data frame received for 5\nI0921 11:33:11.037663 3098 log.go:181] (0x2a54230) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31475/I0921 11:33:11.037821 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:11.037980 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:11.038110 3098 log.go:181] (0x2a54230) (5) Data frame sent\nI0921 11:33:11.038255 3098 log.go:181] (0x2a54000) Data frame received for 5\nI0921 11:33:11.038386 3098 log.go:181] (0x2a54230) (5) Data frame handling\n\nI0921 11:33:11.038495 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:11.038640 3098 log.go:181] (0x2a54230) (5) Data frame sent\nI0921 11:33:11.043199 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:11.043346 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:11.043516 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:11.044058 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:11.044299 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:11.044472 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:11.044604 3098 log.go:181] (0x2a54000) Data frame received for 5\nI0921 11:33:11.044715 3098 log.go:181] (0x2a54230) (5) Data frame handling\nI0921 11:33:11.044855 3098 log.go:181] (0x2a54230) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31475/\nI0921 11:33:11.049411 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:11.049502 3098 log.go:181] 
(0x2b31d50) (3) Data frame handling\nI0921 11:33:11.049601 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:11.050474 3098 log.go:181] (0x2a54000) Data frame received for 5\nI0921 11:33:11.050613 3098 log.go:181] (0x2a54230) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31475/\nI0921 11:33:11.050712 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:11.050836 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:11.050990 3098 log.go:181] (0x2a54230) (5) Data frame sent\nI0921 11:33:11.051108 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:11.054735 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:11.054889 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:11.055052 3098 log.go:181] (0x2b31d50) (3) Data frame sent\nI0921 11:33:11.055823 3098 log.go:181] (0x2a54000) Data frame received for 5\nI0921 11:33:11.055944 3098 log.go:181] (0x2a54230) (5) Data frame handling\nI0921 11:33:11.056631 3098 log.go:181] (0x2a54000) Data frame received for 3\nI0921 11:33:11.056816 3098 log.go:181] (0x2b31d50) (3) Data frame handling\nI0921 11:33:11.057838 3098 log.go:181] (0x2a54000) Data frame received for 1\nI0921 11:33:11.057975 3098 log.go:181] (0x2a54070) (1) Data frame handling\nI0921 11:33:11.058208 3098 log.go:181] (0x2a54070) (1) Data frame sent\nI0921 11:33:11.059095 3098 log.go:181] (0x2a54000) (0x2a54070) Stream removed, broadcasting: 1\nI0921 11:33:11.061838 3098 log.go:181] (0x2a54000) Go away received\nI0921 11:33:11.065442 3098 log.go:181] (0x2a54000) (0x2a54070) Stream removed, broadcasting: 1\nI0921 11:33:11.065778 3098 log.go:181] (0x2a54000) (0x2b31d50) Stream removed, broadcasting: 3\nI0921 11:33:11.066055 3098 log.go:181] (0x2a54000) (0x2a54230) Stream removed, broadcasting: 5\n" Sep 21 11:33:11.079: INFO: stdout: 
"\naffinity-nodeport-4nsld\naffinity-nodeport-4nsld\naffinity-nodeport-4nsld\naffinity-nodeport-4nsld\naffinity-nodeport-4nsld\naffinity-nodeport-4nsld\naffinity-nodeport-4nsld\naffinity-nodeport-4nsld\naffinity-nodeport-4nsld\naffinity-nodeport-4nsld\naffinity-nodeport-4nsld\naffinity-nodeport-4nsld\naffinity-nodeport-4nsld\naffinity-nodeport-4nsld\naffinity-nodeport-4nsld\naffinity-nodeport-4nsld" Sep 21 11:33:11.079: INFO: Received response from host: affinity-nodeport-4nsld Sep 21 11:33:11.079: INFO: Received response from host: affinity-nodeport-4nsld Sep 21 11:33:11.079: INFO: Received response from host: affinity-nodeport-4nsld Sep 21 11:33:11.079: INFO: Received response from host: affinity-nodeport-4nsld Sep 21 11:33:11.079: INFO: Received response from host: affinity-nodeport-4nsld Sep 21 11:33:11.079: INFO: Received response from host: affinity-nodeport-4nsld Sep 21 11:33:11.079: INFO: Received response from host: affinity-nodeport-4nsld Sep 21 11:33:11.079: INFO: Received response from host: affinity-nodeport-4nsld Sep 21 11:33:11.080: INFO: Received response from host: affinity-nodeport-4nsld Sep 21 11:33:11.080: INFO: Received response from host: affinity-nodeport-4nsld Sep 21 11:33:11.080: INFO: Received response from host: affinity-nodeport-4nsld Sep 21 11:33:11.080: INFO: Received response from host: affinity-nodeport-4nsld Sep 21 11:33:11.080: INFO: Received response from host: affinity-nodeport-4nsld Sep 21 11:33:11.080: INFO: Received response from host: affinity-nodeport-4nsld Sep 21 11:33:11.080: INFO: Received response from host: affinity-nodeport-4nsld Sep 21 11:33:11.080: INFO: Received response from host: affinity-nodeport-4nsld Sep 21 11:33:11.080: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-6077, will wait for the garbage collector to delete the pods Sep 21 11:33:11.206: INFO: Deleting ReplicationController affinity-nodeport took: 8.547447ms Sep 21 11:33:11.707: INFO: 
Terminating ReplicationController affinity-nodeport pods took: 500.801416ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:33:23.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6077" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:34.356 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":208,"skipped":3449,"failed":0} [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:33:23.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:33:27.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5536" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":303,"completed":209,"skipped":3449,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:33:27.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 21 11:33:27.888: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f8a18395-0e92-47ad-836e-f6de2afbd184" in namespace "projected-1947" to be "Succeeded or Failed" Sep 21 11:33:27.901: INFO: Pod "downwardapi-volume-f8a18395-0e92-47ad-836e-f6de2afbd184": Phase="Pending", Reason="", readiness=false. Elapsed: 12.71222ms Sep 21 11:33:29.909: INFO: Pod "downwardapi-volume-f8a18395-0e92-47ad-836e-f6de2afbd184": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020755913s Sep 21 11:33:31.917: INFO: Pod "downwardapi-volume-f8a18395-0e92-47ad-836e-f6de2afbd184": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029292504s STEP: Saw pod success Sep 21 11:33:31.918: INFO: Pod "downwardapi-volume-f8a18395-0e92-47ad-836e-f6de2afbd184" satisfied condition "Succeeded or Failed" Sep 21 11:33:31.925: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-f8a18395-0e92-47ad-836e-f6de2afbd184 container client-container: STEP: delete the pod Sep 21 11:33:31.980: INFO: Waiting for pod downwardapi-volume-f8a18395-0e92-47ad-836e-f6de2afbd184 to disappear Sep 21 11:33:31.985: INFO: Pod downwardapi-volume-f8a18395-0e92-47ad-836e-f6de2afbd184 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:33:31.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1947" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":210,"skipped":3469,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:33:32.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0921 11:33:42.156685 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Sep 21 11:34:44.185: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:34:44.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8029" for this suite. 
• [SLOW TEST:72.201 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":303,"completed":211,"skipped":3497,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:34:44.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 11:34:44.258: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:34:45.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6672" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":303,"completed":212,"skipped":3509,"failed":0} S ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:34:45.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:35:45.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "container-probe-8097" for this suite. • [SLOW TEST:60.128 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":303,"completed":213,"skipped":3510,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:35:45.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 21 11:35:58.544: INFO: deployment "sample-webhook-deployment" doesn't 
have the required revision set Sep 21 11:36:00.564: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736284958, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736284958, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736284958, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736284958, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 21 11:36:03.640: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:36:04.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"webhook-2286" for this suite. STEP: Destroying namespace "webhook-2286-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.715 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":303,"completed":214,"skipped":3511,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:36:04.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the 
configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:36:08.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9120" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":303,"completed":215,"skipped":3585,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:36:08.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 21 11:36:08.709: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 21 11:36:08.736: INFO: Waiting for terminating namespaces to be deleted... 
Sep 21 11:36:08.741: INFO: Logging pods the apiserver thinks is on node kali-worker before test Sep 21 11:36:08.749: INFO: kindnet-jk7qk from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 21 11:36:08.749: INFO: Container kindnet-cni ready: true, restart count 0 Sep 21 11:36:08.749: INFO: kube-proxy-kz8hk from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 21 11:36:08.749: INFO: Container kube-proxy ready: true, restart count 0 Sep 21 11:36:08.749: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test Sep 21 11:36:08.757: INFO: kindnet-r64bh from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 21 11:36:08.757: INFO: Container kindnet-cni ready: true, restart count 0 Sep 21 11:36:08.757: INFO: kube-proxy-rnv9w from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 21 11:36:08.757: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-579748c1-dbef-45e1-a1fb-e679134b68d1 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-579748c1-dbef-45e1-a1fb-e679134b68d1 off the node kali-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-579748c1-dbef-45e1-a1fb-e679134b68d1 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:41:21.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5242" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:312.696 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":303,"completed":216,"skipped":3603,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and 
capture the life of a replica set. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:41:21.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:41:32.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8122" for this suite. • [SLOW TEST:11.192 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":303,"completed":217,"skipped":3606,"failed":0} SS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:41:32.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-7167 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7167 STEP: creating replication controller externalsvc in namespace services-7167 I0921 11:41:32.750416 10 runners.go:190] Created replication controller with name: externalsvc, namespace: services-7167, replica count: 2 I0921 
11:41:35.801958 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0921 11:41:38.802801 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Sep 21 11:41:38.918: INFO: Creating new exec pod Sep 21 11:41:42.940: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-7167 execpodxdt88 -- /bin/sh -x -c nslookup nodeport-service.services-7167.svc.cluster.local' Sep 21 11:41:49.036: INFO: stderr: "I0921 11:41:48.910698 3118 log.go:181] (0x2d97a40) (0x2d97ab0) Create stream\nI0921 11:41:48.914389 3118 log.go:181] (0x2d97a40) (0x2d97ab0) Stream added, broadcasting: 1\nI0921 11:41:48.930329 3118 log.go:181] (0x2d97a40) Reply frame received for 1\nI0921 11:41:48.931236 3118 log.go:181] (0x2d97a40) (0x2d97c70) Create stream\nI0921 11:41:48.931346 3118 log.go:181] (0x2d97a40) (0x2d97c70) Stream added, broadcasting: 3\nI0921 11:41:48.933684 3118 log.go:181] (0x2d97a40) Reply frame received for 3\nI0921 11:41:48.934172 3118 log.go:181] (0x2d97a40) (0x2ad0070) Create stream\nI0921 11:41:48.934341 3118 log.go:181] (0x2d97a40) (0x2ad0070) Stream added, broadcasting: 5\nI0921 11:41:48.936235 3118 log.go:181] (0x2d97a40) Reply frame received for 5\nI0921 11:41:49.016661 3118 log.go:181] (0x2d97a40) Data frame received for 5\nI0921 11:41:49.016982 3118 log.go:181] (0x2ad0070) (5) Data frame handling\nI0921 11:41:49.017429 3118 log.go:181] (0x2ad0070) (5) Data frame sent\n+ nslookup nodeport-service.services-7167.svc.cluster.local\nI0921 11:41:49.024439 3118 log.go:181] (0x2d97a40) Data frame received for 3\nI0921 11:41:49.024538 3118 log.go:181] (0x2d97c70) (3) Data frame handling\nI0921 11:41:49.024635 3118 log.go:181] (0x2d97c70) (3) Data frame sent\nI0921 
11:41:49.025221 3118 log.go:181] (0x2d97a40) Data frame received for 5\nI0921 11:41:49.025323 3118 log.go:181] (0x2ad0070) (5) Data frame handling\nI0921 11:41:49.025454 3118 log.go:181] (0x2d97a40) Data frame received for 3\nI0921 11:41:49.025579 3118 log.go:181] (0x2d97c70) (3) Data frame handling\nI0921 11:41:49.025716 3118 log.go:181] (0x2d97c70) (3) Data frame sent\nI0921 11:41:49.025818 3118 log.go:181] (0x2d97a40) Data frame received for 3\nI0921 11:41:49.025887 3118 log.go:181] (0x2d97c70) (3) Data frame handling\nI0921 11:41:49.026032 3118 log.go:181] (0x2d97a40) Data frame received for 1\nI0921 11:41:49.026156 3118 log.go:181] (0x2d97ab0) (1) Data frame handling\nI0921 11:41:49.026298 3118 log.go:181] (0x2d97ab0) (1) Data frame sent\nI0921 11:41:49.026930 3118 log.go:181] (0x2d97a40) (0x2d97ab0) Stream removed, broadcasting: 1\nI0921 11:41:49.028364 3118 log.go:181] (0x2d97a40) Go away received\nI0921 11:41:49.030300 3118 log.go:181] (0x2d97a40) (0x2d97ab0) Stream removed, broadcasting: 1\nI0921 11:41:49.030482 3118 log.go:181] (0x2d97a40) (0x2d97c70) Stream removed, broadcasting: 3\nI0921 11:41:49.030618 3118 log.go:181] (0x2d97a40) (0x2ad0070) Stream removed, broadcasting: 5\n" Sep 21 11:41:49.037: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7167.svc.cluster.local\tcanonical name = externalsvc.services-7167.svc.cluster.local.\nName:\texternalsvc.services-7167.svc.cluster.local\nAddress: 10.107.185.18\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7167, will wait for the garbage collector to delete the pods Sep 21 11:41:49.102: INFO: Deleting ReplicationController externalsvc took: 8.96772ms Sep 21 11:41:49.502: INFO: Terminating ReplicationController externalsvc pods took: 400.887106ms Sep 21 11:42:03.343: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:42:03.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7167" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:30.884 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":303,"completed":218,"skipped":3608,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:42:03.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 21 11:42:21.027: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 21 11:42:23.049: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736285341, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736285341, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736285341, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736285340, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 21 11:42:26.094: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 11:42:26.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5224-crds.webhook.example.com via the 
AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:42:27.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7475" for this suite. STEP: Destroying namespace "webhook-7475-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:23.952 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":303,"completed":219,"skipped":3630,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:42:27.357: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-4eae9da1-eb3b-46b4-a224-14ef8b5ade60 STEP: Creating a pod to test consume configMaps Sep 21 11:42:27.520: INFO: Waiting up to 5m0s for pod "pod-configmaps-a45e9195-585a-4281-904e-6b2834ca3d4a" in namespace "configmap-2555" to be "Succeeded or Failed" Sep 21 11:42:27.536: INFO: Pod "pod-configmaps-a45e9195-585a-4281-904e-6b2834ca3d4a": Phase="Pending", Reason="", readiness=false. Elapsed: 15.749546ms Sep 21 11:42:29.543: INFO: Pod "pod-configmaps-a45e9195-585a-4281-904e-6b2834ca3d4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023154608s Sep 21 11:42:31.591: INFO: Pod "pod-configmaps-a45e9195-585a-4281-904e-6b2834ca3d4a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.071152293s STEP: Saw pod success Sep 21 11:42:31.591: INFO: Pod "pod-configmaps-a45e9195-585a-4281-904e-6b2834ca3d4a" satisfied condition "Succeeded or Failed" Sep 21 11:42:31.597: INFO: Trying to get logs from node kali-worker pod pod-configmaps-a45e9195-585a-4281-904e-6b2834ca3d4a container configmap-volume-test: STEP: delete the pod Sep 21 11:42:31.707: INFO: Waiting for pod pod-configmaps-a45e9195-585a-4281-904e-6b2834ca3d4a to disappear Sep 21 11:42:31.739: INFO: Pod pod-configmaps-a45e9195-585a-4281-904e-6b2834ca3d4a no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:42:31.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2555" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":220,"skipped":3657,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:42:31.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Sep 21 11:42:31.896: INFO: Waiting up to 5m0s for pod "pod-2f2f9760-10f3-4170-91ad-3aca203ab727" in namespace "emptydir-3734" to be "Succeeded or Failed" Sep 21 11:42:31.902: INFO: Pod "pod-2f2f9760-10f3-4170-91ad-3aca203ab727": Phase="Pending", Reason="", readiness=false. Elapsed: 5.849374ms Sep 21 11:42:33.910: INFO: Pod "pod-2f2f9760-10f3-4170-91ad-3aca203ab727": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013810099s Sep 21 11:42:35.918: INFO: Pod "pod-2f2f9760-10f3-4170-91ad-3aca203ab727": Phase="Running", Reason="", readiness=true. Elapsed: 4.021615506s Sep 21 11:42:37.926: INFO: Pod "pod-2f2f9760-10f3-4170-91ad-3aca203ab727": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029775513s STEP: Saw pod success Sep 21 11:42:37.926: INFO: Pod "pod-2f2f9760-10f3-4170-91ad-3aca203ab727" satisfied condition "Succeeded or Failed" Sep 21 11:42:37.931: INFO: Trying to get logs from node kali-worker pod pod-2f2f9760-10f3-4170-91ad-3aca203ab727 container test-container: STEP: delete the pod Sep 21 11:42:37.951: INFO: Waiting for pod pod-2f2f9760-10f3-4170-91ad-3aca203ab727 to disappear Sep 21 11:42:37.968: INFO: Pod pod-2f2f9760-10f3-4170-91ad-3aca203ab727 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:42:37.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3734" for this suite. 
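The "(root,0644,tmpfs)" EmptyDir test above writes a mode-0644 file as root into a memory-backed emptyDir. A minimal pod in the same spirit (name, image, and paths are illustrative, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Create a file, force mode 0644, and print its attributes for verification
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs-backed, per the test name
```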
• [SLOW TEST:6.216 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":221,"skipped":3667,"failed":0} SSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:42:37.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:42:54.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7041" for this suite. 
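The Job test above ("tasks sometimes fail and are locally restarted") exercises a Job whose containers intermittently exit non-zero and are restarted in place (restartPolicy: OnFailure), yet the Job still reaches its requested completions. A toy simulation of that semantics (function and parameter names are made up for illustration):

```python
import random

def run_to_completion(completions, fail_rate=0.5, seed=0):
    """Simulate a Job whose pods sometimes fail and are restarted locally:
    each attempt is one (re)started container; the Job is done once
    `completions` attempts have exited successfully. Sketch only."""
    rng = random.Random(seed)
    succeeded, attempts = 0, 0
    while succeeded < completions:
        attempts += 1
        if rng.random() >= fail_rate:  # task exited 0 this time
            succeeded += 1
    return succeeded, attempts

succeeded, attempts = run_to_completion(3)
```

The point the conformance test checks is only the end state: despite transient failures, the Job eventually records the full number of successful completions.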
• [SLOW TEST:16.145 seconds] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":303,"completed":222,"skipped":3674,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:42:54.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod Sep 21 11:44:54.883: INFO: Successfully updated pod "var-expansion-62303bac-d854-4cd1-8139-4bfcda2883c2" STEP: waiting for pod running STEP: deleting the pod gracefully Sep 21 11:44:56.982: INFO: Deleting 
pod "var-expansion-62303bac-d854-4cd1-8139-4bfcda2883c2" in namespace "var-expansion-7831" Sep 21 11:44:56.997: INFO: Wait up to 5m0s for pod "var-expansion-62303bac-d854-4cd1-8139-4bfcda2883c2" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:45:31.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7831" for this suite. • [SLOW TEST:156.911 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":303,"completed":223,"skipped":3696,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:45:31.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Sep 21 11:45:35.744: INFO: Successfully updated pod "pod-update-05d56bd4-094d-42eb-b762-b48a189f6d78" STEP: verifying the updated pod is in kubernetes Sep 21 11:45:35.754: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:45:35.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6183" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":303,"completed":224,"skipped":3715,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:45:35.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-3229 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 21 11:45:35.872: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Sep 21 11:45:36.017: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 21 11:45:38.024: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 21 11:45:40.026: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 21 11:45:42.026: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 21 11:45:44.025: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 21 11:45:46.026: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 21 11:45:48.024: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 21 11:45:50.025: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 21 11:45:52.032: INFO: The status of Pod netserver-0 is Running (Ready = true) Sep 21 11:45:52.040: INFO: The status of Pod netserver-1 is Running (Ready = false) Sep 21 11:45:54.047: INFO: The status of Pod netserver-1 is Running (Ready = false) Sep 21 11:45:56.053: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Sep 21 11:46:00.126: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.204:8080/dial?request=hostname&protocol=http&host=10.244.1.203&port=8080&tries=1'] Namespace:pod-network-test-3229 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 21 11:46:00.126: INFO: >>> kubeConfig: /root/.kube/config I0921 11:46:00.254126 10 log.go:181] (0xa00abd0) (0xa00ac40) Create stream I0921 11:46:00.254376 10 log.go:181] 
(0xa00abd0) (0xa00ac40) Stream added, broadcasting: 1 I0921 11:46:00.260701 10 log.go:181] (0xa00abd0) Reply frame received for 1 I0921 11:46:00.260946 10 log.go:181] (0xa00abd0) (0xa00ae00) Create stream I0921 11:46:00.261060 10 log.go:181] (0xa00abd0) (0xa00ae00) Stream added, broadcasting: 3 I0921 11:46:00.262888 10 log.go:181] (0xa00abd0) Reply frame received for 3 I0921 11:46:00.263113 10 log.go:181] (0xa00abd0) (0x8984af0) Create stream I0921 11:46:00.263232 10 log.go:181] (0xa00abd0) (0x8984af0) Stream added, broadcasting: 5 I0921 11:46:00.264834 10 log.go:181] (0xa00abd0) Reply frame received for 5 I0921 11:46:00.324461 10 log.go:181] (0xa00abd0) Data frame received for 3 I0921 11:46:00.324638 10 log.go:181] (0xa00ae00) (3) Data frame handling I0921 11:46:00.324813 10 log.go:181] (0xa00abd0) Data frame received for 5 I0921 11:46:00.324944 10 log.go:181] (0x8984af0) (5) Data frame handling I0921 11:46:00.325047 10 log.go:181] (0xa00ae00) (3) Data frame sent I0921 11:46:00.325179 10 log.go:181] (0xa00abd0) Data frame received for 3 I0921 11:46:00.325277 10 log.go:181] (0xa00ae00) (3) Data frame handling I0921 11:46:00.326452 10 log.go:181] (0xa00abd0) Data frame received for 1 I0921 11:46:00.326606 10 log.go:181] (0xa00ac40) (1) Data frame handling I0921 11:46:00.326788 10 log.go:181] (0xa00ac40) (1) Data frame sent I0921 11:46:00.326918 10 log.go:181] (0xa00abd0) (0xa00ac40) Stream removed, broadcasting: 1 I0921 11:46:00.327066 10 log.go:181] (0xa00abd0) Go away received I0921 11:46:00.327510 10 log.go:181] (0xa00abd0) (0xa00ac40) Stream removed, broadcasting: 1 I0921 11:46:00.327633 10 log.go:181] (0xa00abd0) (0xa00ae00) Stream removed, broadcasting: 3 I0921 11:46:00.327725 10 log.go:181] (0xa00abd0) (0x8984af0) Stream removed, broadcasting: 5 Sep 21 11:46:00.328: INFO: Waiting for responses: map[] Sep 21 11:46:00.334: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.1.204:8080/dial?request=hostname&protocol=http&host=10.244.2.233&port=8080&tries=1'] Namespace:pod-network-test-3229 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 21 11:46:00.334: INFO: >>> kubeConfig: /root/.kube/config I0921 11:46:00.448424 10 log.go:181] (0xab28f50) (0xab292d0) Create stream I0921 11:46:00.448613 10 log.go:181] (0xab28f50) (0xab292d0) Stream added, broadcasting: 1 I0921 11:46:00.453399 10 log.go:181] (0xab28f50) Reply frame received for 1 I0921 11:46:00.453587 10 log.go:181] (0xab28f50) (0xa6632d0) Create stream I0921 11:46:00.453673 10 log.go:181] (0xab28f50) (0xa6632d0) Stream added, broadcasting: 3 I0921 11:46:00.455031 10 log.go:181] (0xab28f50) Reply frame received for 3 I0921 11:46:00.455264 10 log.go:181] (0xab28f50) (0xab29d50) Create stream I0921 11:46:00.455374 10 log.go:181] (0xab28f50) (0xab29d50) Stream added, broadcasting: 5 I0921 11:46:00.456847 10 log.go:181] (0xab28f50) Reply frame received for 5 I0921 11:46:00.513574 10 log.go:181] (0xab28f50) Data frame received for 3 I0921 11:46:00.513729 10 log.go:181] (0xa6632d0) (3) Data frame handling I0921 11:46:00.513873 10 log.go:181] (0xa6632d0) (3) Data frame sent I0921 11:46:00.513971 10 log.go:181] (0xab28f50) Data frame received for 3 I0921 11:46:00.514074 10 log.go:181] (0xa6632d0) (3) Data frame handling I0921 11:46:00.514333 10 log.go:181] (0xab28f50) Data frame received for 5 I0921 11:46:00.514475 10 log.go:181] (0xab29d50) (5) Data frame handling I0921 11:46:00.515779 10 log.go:181] (0xab28f50) Data frame received for 1 I0921 11:46:00.515890 10 log.go:181] (0xab292d0) (1) Data frame handling I0921 11:46:00.516014 10 log.go:181] (0xab292d0) (1) Data frame sent I0921 11:46:00.516323 10 log.go:181] (0xab28f50) (0xab292d0) Stream removed, broadcasting: 1 I0921 11:46:00.516511 10 log.go:181] (0xab28f50) Go away received I0921 11:46:00.516892 10 log.go:181] (0xab28f50) (0xab292d0) Stream 
removed, broadcasting: 1 I0921 11:46:00.517100 10 log.go:181] (0xab28f50) (0xa6632d0) Stream removed, broadcasting: 3 I0921 11:46:00.517237 10 log.go:181] (0xab28f50) (0xab29d50) Stream removed, broadcasting: 5 Sep 21 11:46:00.517: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:46:00.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3229" for this suite. • [SLOW TEST:24.761 seconds] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":303,"completed":225,"skipped":3726,"failed":0} SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:46:00.534: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:46:04.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9967" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":226,"skipped":3729,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:46:04.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-639 [It] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-639 STEP: Creating statefulset with conflicting port in namespace statefulset-639 STEP: Waiting until pod test-pod will start running in namespace statefulset-639 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-639 Sep 21 11:46:10.889: INFO: Observed stateful pod in namespace: statefulset-639, name: ss-0, uid: 44cd0e79-a3d9-4cd1-ae89-f31f0df8b37a, status phase: Pending. Waiting for statefulset controller to delete. Sep 21 11:46:11.310: INFO: Observed stateful pod in namespace: statefulset-639, name: ss-0, uid: 44cd0e79-a3d9-4cd1-ae89-f31f0df8b37a, status phase: Failed. Waiting for statefulset controller to delete. Sep 21 11:46:11.339: INFO: Observed stateful pod in namespace: statefulset-639, name: ss-0, uid: 44cd0e79-a3d9-4cd1-ae89-f31f0df8b37a, status phase: Failed. Waiting for statefulset controller to delete. 
Sep 21 11:46:11.346: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-639 STEP: Removing pod with conflicting port in namespace statefulset-639 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-639 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 21 11:46:15.469: INFO: Deleting all statefulset in ns statefulset-639 Sep 21 11:46:15.474: INFO: Scaling statefulset ss to 0 Sep 21 11:46:25.502: INFO: Waiting for statefulset status.replicas updated to 0 Sep 21 11:46:25.508: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:46:25.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-639" for this suite. 
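The StatefulSet test above relies on the controller's stable-naming contract: replicas are always named ss-0 .. ss-(n-1), so when ss-0 fails (here, due to a port conflict) and is deleted, the controller recreates a pod under the exact same name. A simplified single reconcile pass under that contract (not the real controller code):

```python
def reconcile(desired_replicas, existing):
    """One reconcile pass of the StatefulSet naming contract: pods are
    ss-0..ss-(n-1); any missing ordinal (e.g. after an eviction or failed
    pod deletion) is recreated under the same name, and pods beyond the
    desired count are deleted. Simplified sketch."""
    want = {f"ss-{i}" for i in range(desired_replicas)}
    have = set(existing)
    to_create = sorted(want - have)
    to_delete = sorted(have - want)
    return to_create, to_delete

# ss-0 was deleted after failing; the next pass recreates it by name:
to_create, to_delete = reconcile(1, [])
# Scaling to 0 (as the AfterEach does) deletes the remaining replica:
scale_create, scale_delete = reconcile(0, ["ss-0"])
```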
• [SLOW TEST:20.814 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":303,"completed":227,"skipped":3756,"failed":0} SSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:46:25.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Sep 21 11:46:32.199: INFO: Successfully updated pod "adopt-release-7hk8w" STEP: Checking that the Job readopts the Pod Sep 21 11:46:32.200: INFO: 
Waiting up to 15m0s for pod "adopt-release-7hk8w" in namespace "job-4433" to be "adopted" Sep 21 11:46:32.206: INFO: Pod "adopt-release-7hk8w": Phase="Running", Reason="", readiness=true. Elapsed: 6.260416ms Sep 21 11:46:34.226: INFO: Pod "adopt-release-7hk8w": Phase="Running", Reason="", readiness=true. Elapsed: 2.02586951s Sep 21 11:46:34.226: INFO: Pod "adopt-release-7hk8w" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Sep 21 11:46:34.745: INFO: Successfully updated pod "adopt-release-7hk8w" STEP: Checking that the Job releases the Pod Sep 21 11:46:34.746: INFO: Waiting up to 15m0s for pod "adopt-release-7hk8w" in namespace "job-4433" to be "released" Sep 21 11:46:34.941: INFO: Pod "adopt-release-7hk8w": Phase="Running", Reason="", readiness=true. Elapsed: 195.654791ms Sep 21 11:46:34.942: INFO: Pod "adopt-release-7hk8w" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:46:34.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4433" for this suite. 
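The adopt/release test above hinges on label-selector matching: a Job adopts an orphaned pod whose labels match its selector (setting itself as the controller owner reference), and releases a pod whose labels are removed. The core decision in miniature (a sketch; the real controller also compares UIDs and patches ownerReferences through the API):

```python
def adopt_or_release(selector, pod_labels, owner_ref):
    """Return the pod's new controllerRef: the owner if every selector
    key/value matches the pod's labels (adopt), else None (release).
    Illustrative reduction of the controller's adoption logic."""
    matches = all(pod_labels.get(k) == v for k, v in selector.items())
    return owner_ref if matches else None

owner = {"kind": "Job", "name": "adopt-release", "controller": True}
adopted = adopt_or_release({"job": "adopt-release"},
                           {"job": "adopt-release"}, owner)
released = adopt_or_release({"job": "adopt-release"}, {}, owner)
```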
• [SLOW TEST:9.488 seconds] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":303,"completed":228,"skipped":3763,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:46:35.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-5baabeb1-d375-475a-a495-bfb569e6a49f STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:46:41.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "configmap-7192" for this suite. • [SLOW TEST:6.271 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":229,"skipped":3773,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:46:41.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath Sep 21 11:46:41.412: INFO: Waiting up to 5m0s for pod "var-expansion-a84a4230-f5ac-499e-a7b1-37c5ac23a57a" in namespace "var-expansion-8575" to be "Succeeded or Failed" Sep 21 11:46:41.418: INFO: Pod "var-expansion-a84a4230-f5ac-499e-a7b1-37c5ac23a57a": Phase="Pending", 
Reason="", readiness=false. Elapsed: 5.84763ms Sep 21 11:46:43.426: INFO: Pod "var-expansion-a84a4230-f5ac-499e-a7b1-37c5ac23a57a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013921407s Sep 21 11:46:45.435: INFO: Pod "var-expansion-a84a4230-f5ac-499e-a7b1-37c5ac23a57a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022623028s STEP: Saw pod success Sep 21 11:46:45.435: INFO: Pod "var-expansion-a84a4230-f5ac-499e-a7b1-37c5ac23a57a" satisfied condition "Succeeded or Failed" Sep 21 11:46:45.441: INFO: Trying to get logs from node kali-worker pod var-expansion-a84a4230-f5ac-499e-a7b1-37c5ac23a57a container dapi-container: STEP: delete the pod Sep 21 11:46:45.488: INFO: Waiting for pod var-expansion-a84a4230-f5ac-499e-a7b1-37c5ac23a57a to disappear Sep 21 11:46:45.499: INFO: Pod var-expansion-a84a4230-f5ac-499e-a7b1-37c5ac23a57a no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:46:45.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8575" for this suite. 
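The variable-expansion tests above (the failing-subpath one and this volume-subpath one) both exercise kubelet-style `$(VAR)` substitution: known variables are replaced, while references to undefined variables are left verbatim, which is what lets a "failing" expansion be fixed later by updating the pod. A sketch of that substitution rule (not the actual kubelet implementation, which also handles `$$` escaping):

```python
import re

def expand(template, env):
    """Expand $(VAR) references the way kubelet does for subPathExpr and
    command args: defined vars are substituted, undefined ones are left
    as-is. Sketch; real kubelet also supports $$ escaping."""
    def sub(m):
        return env.get(m.group(1), m.group(0))
    return re.sub(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)", sub, template)

path = expand("/vol/$(POD_NAME)", {"POD_NAME": "var-expansion-x"})
untouched = expand("/vol/$(MISSING)", {})
```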
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":303,"completed":230,"skipped":3802,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:46:45.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Sep 21 11:46:49.937: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:46:50.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8199" for this suite. 
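The termination-message test that follows checks the FallbackToLogsOnError policy: when a failed container wrote nothing to its termination message file, the kubelet falls back to the tail of its log (here, "DONE"). The selection rule, reduced to a sketch (the real kubelet also truncates long messages and only falls back for failed containers):

```python
def termination_message(policy, message_file_contents, log_tail):
    """Pick a terminated container's message: the message file wins when
    non-empty; with FallbackToLogsOnError and an empty file, the log tail
    is used instead. Simplified illustration of the kubelet behavior."""
    if message_file_contents:
        return message_file_contents
    if policy == "FallbackToLogsOnError":
        return log_tail
    return ""

msg = termination_message("FallbackToLogsOnError", "", "DONE")
default = termination_message("File", "", "DONE")
```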
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":231,"skipped":3803,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:46:50.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Sep 21 11:46:50.345: INFO: Waiting up to 5m0s for pod "downward-api-71287cdf-4430-4973-85fd-10f6a2e9ce9d" in namespace "downward-api-251" to be "Succeeded or Failed" Sep 21 11:46:50.371: INFO: Pod "downward-api-71287cdf-4430-4973-85fd-10f6a2e9ce9d": Phase="Pending", Reason="", readiness=false. Elapsed: 25.893836ms Sep 21 11:46:52.394: INFO: Pod "downward-api-71287cdf-4430-4973-85fd-10f6a2e9ce9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048079269s Sep 21 11:46:54.402: INFO: Pod "downward-api-71287cdf-4430-4973-85fd-10f6a2e9ce9d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.056313467s STEP: Saw pod success Sep 21 11:46:54.402: INFO: Pod "downward-api-71287cdf-4430-4973-85fd-10f6a2e9ce9d" satisfied condition "Succeeded or Failed" Sep 21 11:46:54.408: INFO: Trying to get logs from node kali-worker pod downward-api-71287cdf-4430-4973-85fd-10f6a2e9ce9d container dapi-container: STEP: delete the pod Sep 21 11:46:54.432: INFO: Waiting for pod downward-api-71287cdf-4430-4973-85fd-10f6a2e9ce9d to disappear Sep 21 11:46:54.455: INFO: Pod downward-api-71287cdf-4430-4973-85fd-10f6a2e9ce9d no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:46:54.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-251" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":303,"completed":232,"skipped":3812,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:46:54.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request 
[NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 21 11:46:54.552: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d97a2375-b633-4620-b95b-bb87289dc35d" in namespace "downward-api-8922" to be "Succeeded or Failed" Sep 21 11:46:54.557: INFO: Pod "downwardapi-volume-d97a2375-b633-4620-b95b-bb87289dc35d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.781256ms Sep 21 11:46:56.569: INFO: Pod "downwardapi-volume-d97a2375-b633-4620-b95b-bb87289dc35d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016605626s Sep 21 11:46:58.577: INFO: Pod "downwardapi-volume-d97a2375-b633-4620-b95b-bb87289dc35d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025424022s STEP: Saw pod success Sep 21 11:46:58.578: INFO: Pod "downwardapi-volume-d97a2375-b633-4620-b95b-bb87289dc35d" satisfied condition "Succeeded or Failed" Sep 21 11:46:58.584: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-d97a2375-b633-4620-b95b-bb87289dc35d container client-container: STEP: delete the pod Sep 21 11:46:58.625: INFO: Waiting for pod downwardapi-volume-d97a2375-b633-4620-b95b-bb87289dc35d to disappear Sep 21 11:46:58.638: INFO: Pod downwardapi-volume-d97a2375-b633-4620-b95b-bb87289dc35d no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:46:58.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8922" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":233,"skipped":3814,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:46:58.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 11:46:58.760: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-392ec9da-5df2-4631-838c-a55d91fbb290" in namespace "security-context-test-2127" to be "Succeeded or Failed" Sep 21 11:46:58.807: INFO: Pod "busybox-readonly-false-392ec9da-5df2-4631-838c-a55d91fbb290": Phase="Pending", Reason="", readiness=false. Elapsed: 46.155334ms Sep 21 11:47:00.816: INFO: Pod "busybox-readonly-false-392ec9da-5df2-4631-838c-a55d91fbb290": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.055446898s Sep 21 11:47:02.822: INFO: Pod "busybox-readonly-false-392ec9da-5df2-4631-838c-a55d91fbb290": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06157484s Sep 21 11:47:02.822: INFO: Pod "busybox-readonly-false-392ec9da-5df2-4631-838c-a55d91fbb290" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:47:02.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2127" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":303,"completed":234,"skipped":3835,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:47:02.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-86bx STEP: Creating a pod to test atomic-volume-subpath Sep 21 11:47:03.031: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-86bx" in namespace "subpath-8189" to be "Succeeded or Failed" Sep 21 11:47:03.099: INFO: Pod "pod-subpath-test-downwardapi-86bx": Phase="Pending", Reason="", readiness=false. Elapsed: 67.818014ms Sep 21 11:47:05.108: INFO: Pod "pod-subpath-test-downwardapi-86bx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076198568s Sep 21 11:47:07.116: INFO: Pod "pod-subpath-test-downwardapi-86bx": Phase="Running", Reason="", readiness=true. Elapsed: 4.084617888s Sep 21 11:47:09.124: INFO: Pod "pod-subpath-test-downwardapi-86bx": Phase="Running", Reason="", readiness=true. Elapsed: 6.092413888s Sep 21 11:47:11.132: INFO: Pod "pod-subpath-test-downwardapi-86bx": Phase="Running", Reason="", readiness=true. Elapsed: 8.100355666s Sep 21 11:47:13.141: INFO: Pod "pod-subpath-test-downwardapi-86bx": Phase="Running", Reason="", readiness=true. Elapsed: 10.109213073s Sep 21 11:47:15.149: INFO: Pod "pod-subpath-test-downwardapi-86bx": Phase="Running", Reason="", readiness=true. Elapsed: 12.117432264s Sep 21 11:47:17.155: INFO: Pod "pod-subpath-test-downwardapi-86bx": Phase="Running", Reason="", readiness=true. Elapsed: 14.123921364s Sep 21 11:47:19.162: INFO: Pod "pod-subpath-test-downwardapi-86bx": Phase="Running", Reason="", readiness=true. Elapsed: 16.13098756s Sep 21 11:47:21.171: INFO: Pod "pod-subpath-test-downwardapi-86bx": Phase="Running", Reason="", readiness=true. Elapsed: 18.139407269s Sep 21 11:47:23.179: INFO: Pod "pod-subpath-test-downwardapi-86bx": Phase="Running", Reason="", readiness=true. Elapsed: 20.147700724s Sep 21 11:47:25.188: INFO: Pod "pod-subpath-test-downwardapi-86bx": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.156415993s Sep 21 11:47:27.196: INFO: Pod "pod-subpath-test-downwardapi-86bx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.164388511s STEP: Saw pod success Sep 21 11:47:27.196: INFO: Pod "pod-subpath-test-downwardapi-86bx" satisfied condition "Succeeded or Failed" Sep 21 11:47:27.201: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-downwardapi-86bx container test-container-subpath-downwardapi-86bx: STEP: delete the pod Sep 21 11:47:27.253: INFO: Waiting for pod pod-subpath-test-downwardapi-86bx to disappear Sep 21 11:47:27.262: INFO: Pod pod-subpath-test-downwardapi-86bx no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-86bx Sep 21 11:47:27.262: INFO: Deleting pod "pod-subpath-test-downwardapi-86bx" in namespace "subpath-8189" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:47:27.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8189" for this suite. 
• [SLOW TEST:24.432 seconds] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":303,"completed":235,"skipped":3877,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:47:27.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:47:40.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4294" for this suite. • [SLOW TEST:13.264 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":303,"completed":236,"skipped":3887,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:47:40.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create and stop a working application [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components Sep 21 11:47:40.645: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Sep 21 11:47:40.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2113' Sep 21 11:47:43.000: INFO: stderr: "" Sep 21 11:47:43.000: INFO: stdout: "service/agnhost-replica created\n" Sep 21 11:47:43.002: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Sep 21 11:47:43.002: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2113' Sep 21 11:47:45.530: INFO: stderr: "" Sep 21 11:47:45.530: INFO: stdout: "service/agnhost-primary created\n" Sep 21 11:47:45.530: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Sep 21 11:47:45.531: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2113' Sep 21 11:47:48.411: INFO: stderr: "" Sep 21 11:47:48.411: INFO: stdout: "service/frontend created\n" Sep 21 11:47:48.412: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Sep 21 11:47:48.412: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2113' Sep 21 11:47:51.420: INFO: stderr: "" Sep 21 11:47:51.420: INFO: stdout: "deployment.apps/frontend created\n" Sep 21 11:47:51.422: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Sep 21 11:47:51.422: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2113' Sep 21 11:47:54.752: INFO: stderr: "" Sep 21 11:47:54.752: INFO: stdout: "deployment.apps/agnhost-primary created\n" Sep 21 11:47:54.753: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Sep 21 11:47:54.754: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2113' Sep 21 11:47:58.323: INFO: stderr: "" Sep 21 11:47:58.323: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Sep 21 11:47:58.323: INFO: Waiting for all frontend pods to be Running. Sep 21 11:47:58.375: INFO: Waiting for frontend to serve content. Sep 21 11:47:59.560: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: Sep 21 11:48:04.605: INFO: Trying to add a new entry to the guestbook. Sep 21 11:48:04.641: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Sep 21 11:48:04.651: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2113' Sep 21 11:48:05.914: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Sep 21 11:48:05.914: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Sep 21 11:48:05.916: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2113' Sep 21 11:48:07.278: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 21 11:48:07.278: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Sep 21 11:48:07.280: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2113' Sep 21 11:48:08.560: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 21 11:48:08.561: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Sep 21 11:48:08.562: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2113' Sep 21 11:48:09.750: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Sep 21 11:48:09.750: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Sep 21 11:48:09.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2113' Sep 21 11:48:11.118: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 21 11:48:11.118: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Sep 21 11:48:11.120: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2113' Sep 21 11:48:12.323: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 21 11:48:12.323: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:48:12.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2113" for this suite. 
• [SLOW TEST:31.835 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:351 should create and stop a working application [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":303,"completed":237,"skipped":3891,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:48:12.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Sep 21 11:48:12.767: INFO: Waiting up to 5m0s for pod "pod-167f7989-e71e-479f-8adb-d01c7ed0eb2b" in namespace "emptydir-6118" to be "Succeeded or Failed" Sep 21 11:48:12.833: INFO: 
Pod "pod-167f7989-e71e-479f-8adb-d01c7ed0eb2b": Phase="Pending", Reason="", readiness=false. Elapsed: 65.763134ms Sep 21 11:48:14.842: INFO: Pod "pod-167f7989-e71e-479f-8adb-d01c7ed0eb2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074568661s Sep 21 11:48:16.856: INFO: Pod "pod-167f7989-e71e-479f-8adb-d01c7ed0eb2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089236175s Sep 21 11:48:18.864: INFO: Pod "pod-167f7989-e71e-479f-8adb-d01c7ed0eb2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.097390221s STEP: Saw pod success Sep 21 11:48:18.865: INFO: Pod "pod-167f7989-e71e-479f-8adb-d01c7ed0eb2b" satisfied condition "Succeeded or Failed" Sep 21 11:48:18.870: INFO: Trying to get logs from node kali-worker pod pod-167f7989-e71e-479f-8adb-d01c7ed0eb2b container test-container: STEP: delete the pod Sep 21 11:48:18.909: INFO: Waiting for pod pod-167f7989-e71e-479f-8adb-d01c7ed0eb2b to disappear Sep 21 11:48:18.926: INFO: Pod pod-167f7989-e71e-479f-8adb-d01c7ed0eb2b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:48:18.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6118" for this suite. 
• [SLOW TEST:6.580 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":238,"skipped":3898,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:48:18.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:48:19.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8073" for 
this suite. STEP: Destroying namespace "nspatchtest-5877f0ac-02a4-448b-9113-4ae20c586825-1779" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":303,"completed":239,"skipped":3912,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:48:19.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-776dd705-682f-4cfc-bfdd-64f7df2a5841 in namespace container-probe-9926 Sep 21 11:48:23.377: INFO: Started pod liveness-776dd705-682f-4cfc-bfdd-64f7df2a5841 in namespace container-probe-9926 STEP: checking the pod's current state and verifying that restartCount is present Sep 21 11:48:23.381: INFO: Initial restart count of pod liveness-776dd705-682f-4cfc-bfdd-64f7df2a5841 is 0 Sep 21 11:48:41.507: INFO: Restart count of pod container-probe-9926/liveness-776dd705-682f-4cfc-bfdd-64f7df2a5841 is now 1 (18.125920492s elapsed) STEP: deleting 
the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:48:41.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9926" for this suite. • [SLOW TEST:22.302 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":240,"skipped":3920,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:48:41.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Sep 21 11:48:41.665: INFO: Waiting up to 5m0s for pod "downward-api-a32cfbce-a120-43d7-8cfb-16449d16ed79" in namespace "downward-api-18" to be "Succeeded or Failed" Sep 21 11:48:41.686: INFO: Pod "downward-api-a32cfbce-a120-43d7-8cfb-16449d16ed79": Phase="Pending", Reason="", readiness=false. Elapsed: 20.399588ms Sep 21 11:48:43.693: INFO: Pod "downward-api-a32cfbce-a120-43d7-8cfb-16449d16ed79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027897353s Sep 21 11:48:45.700: INFO: Pod "downward-api-a32cfbce-a120-43d7-8cfb-16449d16ed79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035207253s STEP: Saw pod success Sep 21 11:48:45.701: INFO: Pod "downward-api-a32cfbce-a120-43d7-8cfb-16449d16ed79" satisfied condition "Succeeded or Failed" Sep 21 11:48:45.705: INFO: Trying to get logs from node kali-worker pod downward-api-a32cfbce-a120-43d7-8cfb-16449d16ed79 container dapi-container: STEP: delete the pod Sep 21 11:48:45.773: INFO: Waiting for pod downward-api-a32cfbce-a120-43d7-8cfb-16449d16ed79 to disappear Sep 21 11:48:45.786: INFO: Pod downward-api-a32cfbce-a120-43d7-8cfb-16449d16ed79 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:48:45.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-18" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":303,"completed":241,"skipped":3929,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:48:45.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5774 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5774 STEP: creating replication controller externalsvc in namespace services-5774 I0921 11:48:46.029177 10 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5774, replica count: 2 I0921 11:48:49.081128 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0921 
11:48:52.082009 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Sep 21 11:48:52.113: INFO: Creating new exec pod Sep 21 11:48:56.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=services-5774 execpodsbtcb -- /bin/sh -x -c nslookup clusterip-service.services-5774.svc.cluster.local' Sep 21 11:48:57.670: INFO: stderr: "I0921 11:48:57.561130 3380 log.go:181] (0x2954000) (0x2954310) Create stream\nI0921 11:48:57.563435 3380 log.go:181] (0x2954000) (0x2954310) Stream added, broadcasting: 1\nI0921 11:48:57.574276 3380 log.go:181] (0x2954000) Reply frame received for 1\nI0921 11:48:57.575153 3380 log.go:181] (0x2954000) (0x2eaa070) Create stream\nI0921 11:48:57.575257 3380 log.go:181] (0x2954000) (0x2eaa070) Stream added, broadcasting: 3\nI0921 11:48:57.577228 3380 log.go:181] (0x2954000) Reply frame received for 3\nI0921 11:48:57.577437 3380 log.go:181] (0x2954000) (0x2954540) Create stream\nI0921 11:48:57.577491 3380 log.go:181] (0x2954000) (0x2954540) Stream added, broadcasting: 5\nI0921 11:48:57.578853 3380 log.go:181] (0x2954000) Reply frame received for 5\nI0921 11:48:57.640704 3380 log.go:181] (0x2954000) Data frame received for 5\nI0921 11:48:57.641032 3380 log.go:181] (0x2954540) (5) Data frame handling\nI0921 11:48:57.641687 3380 log.go:181] (0x2954540) (5) Data frame sent\n+ nslookup clusterip-service.services-5774.svc.cluster.local\nI0921 11:48:57.649221 3380 log.go:181] (0x2954000) Data frame received for 3\nI0921 11:48:57.649336 3380 log.go:181] (0x2eaa070) (3) Data frame handling\nI0921 11:48:57.649462 3380 log.go:181] (0x2eaa070) (3) Data frame sent\nI0921 11:48:57.650595 3380 log.go:181] (0x2954000) Data frame received for 3\nI0921 11:48:57.650690 3380 log.go:181] (0x2eaa070) (3) Data frame handling\nI0921 11:48:57.650814 
3380 log.go:181] (0x2eaa070) (3) Data frame sent\nI0921 11:48:57.650896 3380 log.go:181] (0x2954000) Data frame received for 3\nI0921 11:48:57.650969 3380 log.go:181] (0x2eaa070) (3) Data frame handling\nI0921 11:48:57.651221 3380 log.go:181] (0x2954000) Data frame received for 5\nI0921 11:48:57.651437 3380 log.go:181] (0x2954540) (5) Data frame handling\nI0921 11:48:57.653224 3380 log.go:181] (0x2954000) Data frame received for 1\nI0921 11:48:57.653364 3380 log.go:181] (0x2954310) (1) Data frame handling\nI0921 11:48:57.653508 3380 log.go:181] (0x2954310) (1) Data frame sent\nI0921 11:48:57.654311 3380 log.go:181] (0x2954000) (0x2954310) Stream removed, broadcasting: 1\nI0921 11:48:57.656934 3380 log.go:181] (0x2954000) Go away received\nI0921 11:48:57.660885 3380 log.go:181] (0x2954000) (0x2954310) Stream removed, broadcasting: 1\nI0921 11:48:57.661153 3380 log.go:181] (0x2954000) (0x2eaa070) Stream removed, broadcasting: 3\nI0921 11:48:57.661379 3380 log.go:181] (0x2954000) (0x2954540) Stream removed, broadcasting: 5\n" Sep 21 11:48:57.671: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5774.svc.cluster.local\tcanonical name = externalsvc.services-5774.svc.cluster.local.\nName:\texternalsvc.services-5774.svc.cluster.local\nAddress: 10.96.24.112\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5774, will wait for the garbage collector to delete the pods Sep 21 11:48:57.737: INFO: Deleting ReplicationController externalsvc took: 8.297371ms Sep 21 11:48:58.137: INFO: Terminating ReplicationController externalsvc pods took: 400.885992ms Sep 21 11:49:13.299: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:49:13.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "services-5774" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:27.554 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":303,"completed":242,"skipped":3930,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:49:13.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: 
running the image docker.io/library/httpd:2.4.38-alpine Sep 21 11:49:13.505: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-6116' Sep 21 11:49:14.819: INFO: stderr: "" Sep 21 11:49:14.819: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Sep 21 11:49:14.820: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod -o json --namespace=kubectl-6116' Sep 21 11:49:15.991: INFO: stderr: "" Sep 21 11:49:15.991: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-09-21T11:49:14Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-09-21T11:49:14Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n 
\"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-09-21T11:49:14Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-6116\",\n \"resourceVersion\": \"2074698\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-6116/pods/e2e-test-httpd-pod\",\n \"uid\": \"b01e434c-6baf-4937-8515-f565d4e03fe5\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-vjlzd\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"kali-worker\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-vjlzd\",\n 
\"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-vjlzd\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-21T11:49:14Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-21T11:49:14Z\",\n \"message\": \"containers with unready status: [e2e-test-httpd-pod]\",\n \"reason\": \"ContainersNotReady\",\n \"status\": \"False\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-21T11:49:14Z\",\n \"message\": \"containers with unready status: [e2e-test-httpd-pod]\",\n \"reason\": \"ContainersNotReady\",\n \"status\": \"False\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-21T11:49:14Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": false,\n \"restartCount\": 0,\n \"started\": false,\n \"state\": {\n \"waiting\": {\n \"reason\": \"ContainerCreating\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.11\",\n \"phase\": \"Pending\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-09-21T11:49:14Z\"\n }\n}\n" Sep 21 11:49:15.996: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config replace -f - --dry-run server --namespace=kubectl-6116' Sep 21 11:49:18.895: INFO: stderr: "W0921 11:49:16.820484 3441 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.\n" Sep 21 11:49:18.895: INFO: stdout: "pod/e2e-test-httpd-pod replaced (dry run)\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Sep 21 11:49:18.901: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 
--kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6116' Sep 21 11:49:33.183: INFO: stderr: "" Sep 21 11:49:33.183: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:49:33.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6116" for this suite. • [SLOW TEST:19.883 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:919 should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":303,"completed":243,"skipped":3953,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:49:33.242: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 21 11:49:49.819: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 21 11:49:51.930: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736285789, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736285789, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736285789, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736285789, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 21 11:49:53.937: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736285789, loc:(*time.Location)(0x5d1d160)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736285789, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736285789, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736285789, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 21 11:49:57.014: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:49:57.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9170" for this suite. 
STEP: Destroying namespace "webhook-9170-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:24.090 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":303,"completed":244,"skipped":3957,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:49:57.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 
STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0921 11:50:37.677250 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Sep 21 11:51:39.707: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Sep 21 11:51:39.707: INFO: Deleting pod "simpletest.rc-4jd87" in namespace "gc-8518" Sep 21 11:51:39.854: INFO: Deleting pod "simpletest.rc-4lqzh" in namespace "gc-8518" Sep 21 11:51:39.940: INFO: Deleting pod "simpletest.rc-6zdsr" in namespace "gc-8518" Sep 21 11:51:40.338: INFO: Deleting pod "simpletest.rc-8bcp2" in namespace "gc-8518" Sep 21 11:51:40.681: INFO: Deleting pod "simpletest.rc-gppl7" in namespace "gc-8518" Sep 21 11:51:40.798: INFO: Deleting pod "simpletest.rc-jhcp6" in namespace "gc-8518" Sep 21 11:51:41.017: INFO: Deleting pod "simpletest.rc-kvtdx" in namespace "gc-8518" Sep 21 11:51:41.172: INFO: Deleting pod "simpletest.rc-m5qdj" in namespace "gc-8518" Sep 21 11:51:41.292: INFO: Deleting pod "simpletest.rc-rrvqh" in namespace "gc-8518" Sep 21 11:51:41.564: INFO: Deleting pod "simpletest.rc-sgh9t" in namespace "gc-8518" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:51:42.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8518" for this suite. 
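(Editor's note: "if delete options say so" refers to the deletion propagationPolicy. A sketch of the delete-options object such a request carries — the exact wire form the test uses is not shown in the log, and kubectl's `--cascade` flag maps to this field — looks roughly like:)

```yaml
# Sketch of API delete options that orphan dependents instead of cascading.
# Assumption: meta/v1 DeleteOptions serialized with apiVersion "v1".
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan   # leave the RC's pods behind; GC must NOT collect them
```

With `Orphan`, the garbage collector strips the owner references from the pods rather than deleting them, which is why the test waits 30 seconds to confirm the pods survive and then cleans up each simpletest.rc-* pod by hand.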
• [SLOW TEST:105.331 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":303,"completed":245,"skipped":3968,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:51:42.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should scale a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Sep 21 11:51:42.831: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6734' Sep 21 11:51:46.130: INFO: stderr: "" Sep 21 11:51:46.131: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Sep 21 11:51:46.131: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6734' Sep 21 11:51:53.940: INFO: stderr: "" Sep 21 11:51:53.940: INFO: stdout: "update-demo-nautilus-hjzds update-demo-nautilus-xf5dl " Sep 21 11:51:53.941: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hjzds -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6734' Sep 21 11:51:55.194: INFO: stderr: "" Sep 21 11:51:55.194: INFO: stdout: "true" Sep 21 11:51:55.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hjzds -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6734' Sep 21 11:51:56.583: INFO: stderr: "" Sep 21 11:51:56.583: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 21 11:51:56.583: INFO: validating pod update-demo-nautilus-hjzds Sep 21 11:51:56.591: INFO: got data: { "image": "nautilus.jpg" } Sep 21 11:51:56.591: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Sep 21 11:51:56.591: INFO: update-demo-nautilus-hjzds is verified up and running Sep 21 11:51:56.591: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xf5dl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6734' Sep 21 11:51:57.793: INFO: stderr: "" Sep 21 11:51:57.794: INFO: stdout: "true" Sep 21 11:51:57.794: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xf5dl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6734' Sep 21 11:51:59.066: INFO: stderr: "" Sep 21 11:51:59.066: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 21 11:51:59.066: INFO: validating pod update-demo-nautilus-xf5dl Sep 21 11:51:59.072: INFO: got data: { "image": "nautilus.jpg" } Sep 21 11:51:59.073: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 21 11:51:59.073: INFO: update-demo-nautilus-xf5dl is verified up and running STEP: scaling down the replication controller Sep 21 11:51:59.085: INFO: scanned /root for discovery docs: Sep 21 11:51:59.086: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6734' Sep 21 11:52:00.555: INFO: stderr: "" Sep 21 11:52:00.555: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Sep 21 11:52:00.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6734' Sep 21 11:52:01.885: INFO: stderr: "" Sep 21 11:52:01.885: INFO: stdout: "update-demo-nautilus-hjzds update-demo-nautilus-xf5dl " STEP: Replicas for name=update-demo: expected=1 actual=2 Sep 21 11:52:06.887: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6734' Sep 21 11:52:08.179: INFO: stderr: "" Sep 21 11:52:08.179: INFO: stdout: "update-demo-nautilus-xf5dl " Sep 21 11:52:08.179: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xf5dl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6734' Sep 21 11:52:09.345: INFO: stderr: "" Sep 21 11:52:09.345: INFO: stdout: "true" Sep 21 11:52:09.345: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xf5dl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6734' Sep 21 11:52:10.652: INFO: stderr: "" Sep 21 11:52:10.652: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 21 11:52:10.652: INFO: validating pod update-demo-nautilus-xf5dl Sep 21 11:52:10.658: INFO: got data: { "image": "nautilus.jpg" } Sep 21 11:52:10.658: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Sep 21 11:52:10.658: INFO: update-demo-nautilus-xf5dl is verified up and running STEP: scaling up the replication controller Sep 21 11:52:10.667: INFO: scanned /root for discovery docs: Sep 21 11:52:10.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6734' Sep 21 11:52:12.973: INFO: stderr: "" Sep 21 11:52:12.973: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Sep 21 11:52:12.973: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6734' Sep 21 11:52:14.224: INFO: stderr: "" Sep 21 11:52:14.225: INFO: stdout: "update-demo-nautilus-rx6rj update-demo-nautilus-xf5dl " Sep 21 11:52:14.225: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rx6rj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6734' Sep 21 11:52:15.407: INFO: stderr: "" Sep 21 11:52:15.407: INFO: stdout: "" Sep 21 11:52:15.407: INFO: update-demo-nautilus-rx6rj is created but not running Sep 21 11:52:20.408: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6734' Sep 21 11:52:21.735: INFO: stderr: "" Sep 21 11:52:21.735: INFO: stdout: "update-demo-nautilus-rx6rj update-demo-nautilus-xf5dl " Sep 21 11:52:21.735: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rx6rj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6734' Sep 21 11:52:22.995: INFO: stderr: "" Sep 21 11:52:22.995: INFO: stdout: "true" Sep 21 11:52:22.996: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rx6rj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6734' Sep 21 11:52:24.233: INFO: stderr: "" Sep 21 11:52:24.233: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 21 11:52:24.233: INFO: validating pod update-demo-nautilus-rx6rj Sep 21 11:52:24.239: INFO: got data: { "image": "nautilus.jpg" } Sep 21 11:52:24.239: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Sep 21 11:52:24.240: INFO: update-demo-nautilus-rx6rj is verified up and running Sep 21 11:52:24.240: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xf5dl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6734' Sep 21 11:52:25.416: INFO: stderr: "" Sep 21 11:52:25.416: INFO: stdout: "true" Sep 21 11:52:25.417: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xf5dl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6734' Sep 21 11:52:26.600: INFO: stderr: "" Sep 21 11:52:26.600: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 21 11:52:26.600: INFO: validating pod update-demo-nautilus-xf5dl Sep 21 11:52:26.606: INFO: got data: { "image": "nautilus.jpg" } Sep 21 11:52:26.606: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 21 11:52:26.606: INFO: update-demo-nautilus-xf5dl is verified up and running STEP: using delete to clean up resources Sep 21 11:52:26.607: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6734' Sep 21 11:52:27.925: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Sep 21 11:52:27.926: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Sep 21 11:52:27.927: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6734' Sep 21 11:52:29.275: INFO: stderr: "No resources found in kubectl-6734 namespace.\n" Sep 21 11:52:29.276: INFO: stdout: "" Sep 21 11:52:29.276: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6734 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 21 11:52:30.586: INFO: stderr: "" Sep 21 11:52:30.586: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:52:30.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6734" for this suite. 
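The repeated `kubectl get pods -o template` invocations above use a go-template that prints `true` only when a container named `update-demo` appears in `status.containerStatuses` with a `running` state. As an illustration (not part of the suite), the same check can be re-implemented over the pod JSON in Python; the pod fragment below uses hypothetical values shaped like the API object:

```python
def container_running(pod, container_name):
    """Mirror of the test's go-template: return "true" only when the named
    container exists in status.containerStatuses with a 'running' state,
    otherwise return an empty string (like the template's empty stdout)."""
    for cs in pod.get("status", {}).get("containerStatuses", []):
        if cs.get("name") == container_name and "running" in cs.get("state", {}):
            return "true"
    return ""

# A pod fragment shaped like the template's input (hypothetical values).
pod = {
    "status": {
        "containerStatuses": [
            {"name": "update-demo",
             "state": {"running": {"startedAt": "2020-09-21T11:52:00Z"}}}
        ]
    }
}
print(container_running(pod, "update-demo"))  # -> true
```

An empty stdout (as seen for `update-demo-nautilus-rx6rj` while it was "created but not running") corresponds to the container not yet having a `running` entry, so the test polls again after 5 seconds.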
• [SLOW TEST:47.945 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should scale a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":303,"completed":246,"skipped":3973,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:52:30.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Sep 21 11:52:30.776: INFO: Got : 
ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5711 /api/v1/namespaces/watch-5711/configmaps/e2e-watch-test-configmap-a 4fa8955e-8fc8-4f36-90f0-226e9c80448b 2075649 0 2020-09-21 11:52:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-21 11:52:30 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Sep 21 11:52:30.777: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5711 /api/v1/namespaces/watch-5711/configmaps/e2e-watch-test-configmap-a 4fa8955e-8fc8-4f36-90f0-226e9c80448b 2075649 0 2020-09-21 11:52:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-21 11:52:30 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Sep 21 11:52:40.791: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5711 /api/v1/namespaces/watch-5711/configmaps/e2e-watch-test-configmap-a 4fa8955e-8fc8-4f36-90f0-226e9c80448b 2075706 0 2020-09-21 11:52:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-21 11:52:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 21 11:52:40.793: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5711 /api/v1/namespaces/watch-5711/configmaps/e2e-watch-test-configmap-a 4fa8955e-8fc8-4f36-90f0-226e9c80448b 2075706 0 2020-09-21 11:52:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-21 11:52:40 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Sep 21 11:52:50.805: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5711 /api/v1/namespaces/watch-5711/configmaps/e2e-watch-test-configmap-a 4fa8955e-8fc8-4f36-90f0-226e9c80448b 2075736 0 2020-09-21 11:52:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-21 11:52:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 21 11:52:50.806: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5711 /api/v1/namespaces/watch-5711/configmaps/e2e-watch-test-configmap-a 4fa8955e-8fc8-4f36-90f0-226e9c80448b 2075736 0 2020-09-21 11:52:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-21 11:52:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Sep 21 11:53:00.818: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5711 /api/v1/namespaces/watch-5711/configmaps/e2e-watch-test-configmap-a 4fa8955e-8fc8-4f36-90f0-226e9c80448b 2075766 0 2020-09-21 11:52:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-21 11:52:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 21 11:53:00.819: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5711 /api/v1/namespaces/watch-5711/configmaps/e2e-watch-test-configmap-a 4fa8955e-8fc8-4f36-90f0-226e9c80448b 2075766 0 2020-09-21 11:52:30 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-21 11:52:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Sep 21 11:53:10.831: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5711 /api/v1/namespaces/watch-5711/configmaps/e2e-watch-test-configmap-b 2d017ebb-865b-439a-b91f-4ecc947b5298 2075794 0 2020-09-21 11:53:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-09-21 11:53:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Sep 21 11:53:10.832: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5711 /api/v1/namespaces/watch-5711/configmaps/e2e-watch-test-configmap-b 2d017ebb-865b-439a-b91f-4ecc947b5298 2075794 0 2020-09-21 11:53:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-09-21 11:53:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Sep 21 11:53:20.843: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5711 /api/v1/namespaces/watch-5711/configmaps/e2e-watch-test-configmap-b 2d017ebb-865b-439a-b91f-4ecc947b5298 
2075824 0 2020-09-21 11:53:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-09-21 11:53:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Sep 21 11:53:20.844: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5711 /api/v1/namespaces/watch-5711/configmaps/e2e-watch-test-configmap-b 2d017ebb-865b-439a-b91f-4ecc947b5298 2075824 0 2020-09-21 11:53:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-09-21 11:53:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:53:30.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5711" for this suite. 
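The Watchers test above verifies that each watcher, opened with a different label selector, observes exactly the ADDED/MODIFIED/DELETED notifications for configmaps matching its selector. A minimal stand-alone simulation of that filtering (pure Python, not the real watch API) over the event sequence the log shows:

```python
def matching_events(events, selector):
    """Filter (event_type, labels) pairs by an equality label selector,
    the way a watch opened with `labelSelector=key=value` would."""
    key, value = selector
    return [etype for etype, labels in events if labels.get(key) == value]

# Event sequence mirroring the log: configmap A is added, modified twice,
# and deleted; configmap B is added and deleted.
events = [
    ("ADDED",    {"watch-this-configmap": "multiple-watchers-A"}),
    ("MODIFIED", {"watch-this-configmap": "multiple-watchers-A"}),
    ("MODIFIED", {"watch-this-configmap": "multiple-watchers-A"}),
    ("DELETED",  {"watch-this-configmap": "multiple-watchers-A"}),
    ("ADDED",    {"watch-this-configmap": "multiple-watchers-B"}),
    ("DELETED",  {"watch-this-configmap": "multiple-watchers-B"}),
]

watcher_a = matching_events(events, ("watch-this-configmap", "multiple-watchers-A"))
watcher_b = matching_events(events, ("watch-this-configmap", "multiple-watchers-B"))
print(watcher_a)  # ['ADDED', 'MODIFIED', 'MODIFIED', 'DELETED']
print(watcher_b)  # ['ADDED', 'DELETED']
```

In the log each notification appears twice because the A-or-B watcher receives the same events as the single-label watcher; the simulation above covers only the two single-label watchers.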
• [SLOW TEST:60.250 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":303,"completed":247,"skipped":3975,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:53:30.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig 
+notcp +noall +answer +search dns-test-service.dns-5947 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5947;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5947 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5947;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5947.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5947.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5947.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5947.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5947.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5947.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5947.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5947.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5947.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5947.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5947.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5947.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5947.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 205.12.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.12.205_udp@PTR;check="$$(dig +tcp +noall +answer +search 205.12.111.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.111.12.205_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5947 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5947;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5947 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5947;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5947.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5947.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5947.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5947.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5947.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5947.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5947.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5947.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5947.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5947.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5947.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5947.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5947.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 205.12.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.12.205_udp@PTR;check="$$(dig +tcp +noall +answer +search 205.12.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.12.205_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 21 11:53:37.209: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:37.214: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:37.219: INFO: Unable to read wheezy_udp@dns-test-service.dns-5947 from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:37.223: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5947 from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:37.228: INFO: Unable to read wheezy_udp@dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods 
dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:37.232: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:37.237: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:37.241: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:37.275: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:37.280: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:37.285: INFO: Unable to read jessie_udp@dns-test-service.dns-5947 from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:37.290: INFO: Unable to read jessie_tcp@dns-test-service.dns-5947 from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:37.295: INFO: Unable to read jessie_udp@dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the 
requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:37.304: INFO: Unable to read jessie_tcp@dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:37.309: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:37.313: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:37.335: INFO: Lookups using dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5947 wheezy_tcp@dns-test-service.dns-5947 wheezy_udp@dns-test-service.dns-5947.svc wheezy_tcp@dns-test-service.dns-5947.svc wheezy_udp@_http._tcp.dns-test-service.dns-5947.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5947.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5947 jessie_tcp@dns-test-service.dns-5947 jessie_udp@dns-test-service.dns-5947.svc jessie_tcp@dns-test-service.dns-5947.svc jessie_udp@_http._tcp.dns-test-service.dns-5947.svc jessie_tcp@_http._tcp.dns-test-service.dns-5947.svc] Sep 21 11:53:42.342: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:42.347: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not 
find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:42.352: INFO: Unable to read wheezy_udp@dns-test-service.dns-5947 from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:42.356: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5947 from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:42.360: INFO: Unable to read wheezy_udp@dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:42.364: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:42.369: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:42.373: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:42.399: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:42.403: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: 
the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:42.408: INFO: Unable to read jessie_udp@dns-test-service.dns-5947 from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:42.412: INFO: Unable to read jessie_tcp@dns-test-service.dns-5947 from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:42.417: INFO: Unable to read jessie_udp@dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:42.421: INFO: Unable to read jessie_tcp@dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:42.424: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:42.428: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:42.454: INFO: Lookups using dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5947 wheezy_tcp@dns-test-service.dns-5947 wheezy_udp@dns-test-service.dns-5947.svc wheezy_tcp@dns-test-service.dns-5947.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-5947.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5947.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5947 jessie_tcp@dns-test-service.dns-5947 jessie_udp@dns-test-service.dns-5947.svc jessie_tcp@dns-test-service.dns-5947.svc jessie_udp@_http._tcp.dns-test-service.dns-5947.svc jessie_tcp@_http._tcp.dns-test-service.dns-5947.svc] Sep 21 11:53:47.343: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:47.349: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:47.354: INFO: Unable to read wheezy_udp@dns-test-service.dns-5947 from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:47.360: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5947 from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:47.365: INFO: Unable to read wheezy_udp@dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:47.370: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:47.379: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:47.383: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:47.415: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:47.420: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:47.425: INFO: Unable to read jessie_udp@dns-test-service.dns-5947 from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:47.430: INFO: Unable to read jessie_tcp@dns-test-service.dns-5947 from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:47.435: INFO: Unable to read jessie_udp@dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:47.440: INFO: Unable to read jessie_tcp@dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:47.445: 
INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:47.450: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:47.494: INFO: Lookups using dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5947 wheezy_tcp@dns-test-service.dns-5947 wheezy_udp@dns-test-service.dns-5947.svc wheezy_tcp@dns-test-service.dns-5947.svc wheezy_udp@_http._tcp.dns-test-service.dns-5947.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5947.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5947 jessie_tcp@dns-test-service.dns-5947 jessie_udp@dns-test-service.dns-5947.svc jessie_tcp@dns-test-service.dns-5947.svc jessie_udp@_http._tcp.dns-test-service.dns-5947.svc jessie_tcp@_http._tcp.dns-test-service.dns-5947.svc] Sep 21 11:53:52.343: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:52.349: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:52.354: INFO: Unable to read wheezy_udp@dns-test-service.dns-5947 from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 
11:53:52.360: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5947 from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:52.365: INFO: Unable to read wheezy_udp@dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:52.370: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:52.375: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:52.379: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:52.412: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:52.417: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:52.422: INFO: Unable to read jessie_udp@dns-test-service.dns-5947 from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods 
dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:52.426: INFO: Unable to read jessie_tcp@dns-test-service.dns-5947 from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:52.430: INFO: Unable to read jessie_udp@dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:52.434: INFO: Unable to read jessie_tcp@dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:52.439: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:52.444: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:52.472: INFO: Lookups using dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5947 wheezy_tcp@dns-test-service.dns-5947 wheezy_udp@dns-test-service.dns-5947.svc wheezy_tcp@dns-test-service.dns-5947.svc wheezy_udp@_http._tcp.dns-test-service.dns-5947.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5947.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5947 jessie_tcp@dns-test-service.dns-5947 jessie_udp@dns-test-service.dns-5947.svc jessie_tcp@dns-test-service.dns-5947.svc 
jessie_udp@_http._tcp.dns-test-service.dns-5947.svc jessie_tcp@_http._tcp.dns-test-service.dns-5947.svc] Sep 21 11:53:57.342: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:57.347: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:57.352: INFO: Unable to read wheezy_udp@dns-test-service.dns-5947 from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:57.357: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5947 from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:57.362: INFO: Unable to read wheezy_udp@dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:57.366: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:57.371: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:57.375: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5947.svc from pod 
dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:57.487: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:57.492: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:57.496: INFO: Unable to read jessie_udp@dns-test-service.dns-5947 from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:57.500: INFO: Unable to read jessie_tcp@dns-test-service.dns-5947 from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:57.505: INFO: Unable to read jessie_udp@dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:57.509: INFO: Unable to read jessie_tcp@dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:57.514: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:57.519: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:53:57.547: INFO: Lookups using dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5947 wheezy_tcp@dns-test-service.dns-5947 wheezy_udp@dns-test-service.dns-5947.svc wheezy_tcp@dns-test-service.dns-5947.svc wheezy_udp@_http._tcp.dns-test-service.dns-5947.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5947.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5947 jessie_tcp@dns-test-service.dns-5947 jessie_udp@dns-test-service.dns-5947.svc jessie_tcp@dns-test-service.dns-5947.svc jessie_udp@_http._tcp.dns-test-service.dns-5947.svc jessie_tcp@_http._tcp.dns-test-service.dns-5947.svc] Sep 21 11:54:02.344: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:54:02.348: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:54:02.354: INFO: Unable to read wheezy_udp@dns-test-service.dns-5947 from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:54:02.359: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5947 from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:54:02.363: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:54:02.368: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:54:02.372: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:54:02.376: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:54:02.408: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:54:02.412: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:54:02.416: INFO: Unable to read jessie_udp@dns-test-service.dns-5947 from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:54:02.421: INFO: Unable to read jessie_tcp@dns-test-service.dns-5947 from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:54:02.425: 
INFO: Unable to read jessie_udp@dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:54:02.429: INFO: Unable to read jessie_tcp@dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:54:02.432: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:54:02.436: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5947.svc from pod dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285: the server could not find the requested resource (get pods dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285) Sep 21 11:54:02.463: INFO: Lookups using dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5947 wheezy_tcp@dns-test-service.dns-5947 wheezy_udp@dns-test-service.dns-5947.svc wheezy_tcp@dns-test-service.dns-5947.svc wheezy_udp@_http._tcp.dns-test-service.dns-5947.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5947.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5947 jessie_tcp@dns-test-service.dns-5947 jessie_udp@dns-test-service.dns-5947.svc jessie_tcp@dns-test-service.dns-5947.svc jessie_udp@_http._tcp.dns-test-service.dns-5947.svc jessie_tcp@_http._tcp.dns-test-service.dns-5947.svc] Sep 21 11:54:07.486: INFO: DNS probes using dns-5947/dns-test-01ac8fca-d9f0-4793-8abd-2646114ff285 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:54:08.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5947" for this suite. • [SLOW TEST:37.416 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":303,"completed":248,"skipped":3986,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:54:08.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args Sep 21 11:54:08.429: INFO: Waiting up to 5m0s 
for pod "var-expansion-32411815-4ba7-45d7-9964-a136b971bc63" in namespace "var-expansion-5849" to be "Succeeded or Failed" Sep 21 11:54:08.445: INFO: Pod "var-expansion-32411815-4ba7-45d7-9964-a136b971bc63": Phase="Pending", Reason="", readiness=false. Elapsed: 15.264739ms Sep 21 11:54:10.519: INFO: Pod "var-expansion-32411815-4ba7-45d7-9964-a136b971bc63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089172742s Sep 21 11:54:12.525: INFO: Pod "var-expansion-32411815-4ba7-45d7-9964-a136b971bc63": Phase="Running", Reason="", readiness=true. Elapsed: 4.095812973s Sep 21 11:54:14.535: INFO: Pod "var-expansion-32411815-4ba7-45d7-9964-a136b971bc63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.105334673s STEP: Saw pod success Sep 21 11:54:14.535: INFO: Pod "var-expansion-32411815-4ba7-45d7-9964-a136b971bc63" satisfied condition "Succeeded or Failed" Sep 21 11:54:14.547: INFO: Trying to get logs from node kali-worker pod var-expansion-32411815-4ba7-45d7-9964-a136b971bc63 container dapi-container: STEP: delete the pod Sep 21 11:54:14.650: INFO: Waiting for pod var-expansion-32411815-4ba7-45d7-9964-a136b971bc63 to disappear Sep 21 11:54:14.667: INFO: Pod var-expansion-32411815-4ba7-45d7-9964-a136b971bc63 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:54:14.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5849" for this suite. 
• [SLOW TEST:6.404 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":303,"completed":249,"skipped":3996,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:54:14.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Sep 21 11:54:14.881: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-651 /api/v1/namespaces/watch-651/configmaps/e2e-watch-test-resource-version 99f29cc8-6cc0-426c-9a4d-e69730299ef7 2076079 0 2020-09-21 11:54:14 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-09-21 11:54:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 21 11:54:14.882: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-651 /api/v1/namespaces/watch-651/configmaps/e2e-watch-test-resource-version 99f29cc8-6cc0-426c-9a4d-e69730299ef7 2076080 0 2020-09-21 11:54:14 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-09-21 11:54:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:54:14.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-651" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":303,"completed":250,"skipped":4022,"failed":0} SSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:54:14.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-6607 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6607 to expose endpoints map[] Sep 21 11:54:15.121: INFO: successfully validated that service endpoint-test2 in namespace services-6607 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-6607 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6607 to expose endpoints map[pod1:[80]] Sep 21 11:54:19.244: INFO: successfully validated that service endpoint-test2 in namespace services-6607 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-6607 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6607 to expose 
endpoints map[pod1:[80] pod2:[80]] Sep 21 11:54:23.297: INFO: successfully validated that service endpoint-test2 in namespace services-6607 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-6607 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6607 to expose endpoints map[pod2:[80]] Sep 21 11:54:23.389: INFO: successfully validated that service endpoint-test2 in namespace services-6607 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-6607 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6607 to expose endpoints map[] Sep 21 11:54:23.428: INFO: successfully validated that service endpoint-test2 in namespace services-6607 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:54:23.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6607" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:8.783 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":303,"completed":251,"skipped":4027,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:54:23.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 11:54:23.816: 
INFO: Waiting up to 5m0s for pod "busybox-user-65534-fd56a60f-98c8-48e2-8e70-7e01147fdc67" in namespace "security-context-test-3906" to be "Succeeded or Failed" Sep 21 11:54:23.834: INFO: Pod "busybox-user-65534-fd56a60f-98c8-48e2-8e70-7e01147fdc67": Phase="Pending", Reason="", readiness=false. Elapsed: 17.597471ms Sep 21 11:54:25.841: INFO: Pod "busybox-user-65534-fd56a60f-98c8-48e2-8e70-7e01147fdc67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024933935s Sep 21 11:54:27.979: INFO: Pod "busybox-user-65534-fd56a60f-98c8-48e2-8e70-7e01147fdc67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.162873847s Sep 21 11:54:27.980: INFO: Pod "busybox-user-65534-fd56a60f-98c8-48e2-8e70-7e01147fdc67" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:54:27.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3906" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":252,"skipped":4045,"failed":0} ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:54:28.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': 
should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:55:02.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-262" for this suite. • [SLOW TEST:34.242 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":303,"completed":253,"skipped":4045,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:55:02.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 11:55:02.389: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-8860a531-7faf-44c9-96c6-cfddc98d2330" in namespace "security-context-test-2131" to be "Succeeded or Failed" Sep 21 11:55:02.415: INFO: Pod "alpine-nnp-false-8860a531-7faf-44c9-96c6-cfddc98d2330": Phase="Pending", Reason="", readiness=false. Elapsed: 26.017004ms Sep 21 11:55:04.424: INFO: Pod "alpine-nnp-false-8860a531-7faf-44c9-96c6-cfddc98d2330": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035393155s Sep 21 11:55:06.436: INFO: Pod "alpine-nnp-false-8860a531-7faf-44c9-96c6-cfddc98d2330": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.047539095s Sep 21 11:55:06.437: INFO: Pod "alpine-nnp-false-8860a531-7faf-44c9-96c6-cfddc98d2330" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:55:06.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2131" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":254,"skipped":4063,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:55:06.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check is all data is printed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 11:55:06.560: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config version' Sep 21 11:55:07.701: INFO: stderr: 
"" Sep 21 11:55:07.701: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19\", GitVersion:\"v1.19.2\", GitCommit:\"f5743093fd1c663cb0cbc89748f730662345d44d\", GitTreeState:\"clean\", BuildDate:\"2020-09-16T13:41:02Z\", GoVersion:\"go1.15\", Compiler:\"gc\", Platform:\"linux/arm\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19\", GitVersion:\"v1.19.0\", GitCommit:\"e19964183377d0ec2052d1f1fa930c4d7575bd50\", GitTreeState:\"clean\", BuildDate:\"2020-08-28T22:11:08Z\", GoVersion:\"go1.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:55:07.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9975" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":303,"completed":255,"skipped":4077,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:55:07.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp 
+noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4501.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4501.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 21 11:55:15.953: INFO: DNS probes using dns-4501/dns-test-9636ca9b-ca60-4d70-8eae-802f93fd30b8 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:55:15.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4501" for this suite. • [SLOW TEST:8.329 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":303,"completed":256,"skipped":4079,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:55:16.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3705 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Sep 21 11:55:16.668: INFO: Found 0 stateful pods, waiting for 3 Sep 21 11:55:26.686: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 21 11:55:26.686: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 21 11:55:26.686: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Sep 21 11:55:36.680: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 21 11:55:36.681: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 21 11:55:36.681: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Sep 21 11:55:36.702: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-3705 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 21 11:55:38.262: INFO: stderr: "I0921 11:55:38.100676 3942 log.go:181] (0x303c000) (0x303c070) Create stream\nI0921 11:55:38.104288 3942 log.go:181] (0x303c000) (0x303c070) Stream added, broadcasting: 1\nI0921 11:55:38.112638 3942 log.go:181] (0x303c000) Reply frame received for 1\nI0921 11:55:38.113943 3942 log.go:181] (0x303c000) (0x27c78f0) Create stream\nI0921 11:55:38.114121 3942 log.go:181] (0x303c000) (0x27c78f0) Stream added, broadcasting: 3\nI0921 11:55:38.126894 3942 log.go:181] (0x303c000) Reply frame received for 3\nI0921 11:55:38.127239 3942 log.go:181] (0x303c000) (0x2e9a070) Create stream\nI0921 11:55:38.127330 3942 log.go:181] (0x303c000) (0x2e9a070) Stream added, broadcasting: 5\nI0921 11:55:38.128887 3942 log.go:181] (0x303c000) Reply frame received for 5\nI0921 11:55:38.216837 3942 log.go:181] (0x303c000) Data frame received for 5\nI0921 11:55:38.217188 3942 log.go:181] (0x2e9a070) (5) Data frame handling\nI0921 11:55:38.217864 3942 log.go:181] (0x2e9a070) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0921 11:55:38.242515 3942 log.go:181] (0x303c000) Data frame received for 3\nI0921 11:55:38.242638 3942 log.go:181] (0x27c78f0) (3) Data frame handling\nI0921 11:55:38.242747 3942 log.go:181] (0x303c000) Data frame received for 5\nI0921 11:55:38.242944 3942 log.go:181] (0x2e9a070) (5) Data frame handling\nI0921 11:55:38.243067 3942 log.go:181] (0x27c78f0) (3) Data frame sent\nI0921 11:55:38.243167 3942 log.go:181] (0x303c000) Data frame received for 3\nI0921 11:55:38.243220 3942 log.go:181] (0x27c78f0) (3) Data frame handling\nI0921 11:55:38.244792 3942 log.go:181] (0x303c000) Data frame received for 1\nI0921 11:55:38.244994 3942 log.go:181] (0x303c070) (1) Data frame handling\nI0921 11:55:38.245186 3942 log.go:181] (0x303c070) (1) Data frame sent\nI0921 11:55:38.247568 3942 log.go:181] (0x303c000) (0x303c070) 
Stream removed, broadcasting: 1\nI0921 11:55:38.248822 3942 log.go:181] (0x303c000) Go away received\nI0921 11:55:38.252232 3942 log.go:181] (0x303c000) (0x303c070) Stream removed, broadcasting: 1\nI0921 11:55:38.252547 3942 log.go:181] (0x303c000) (0x27c78f0) Stream removed, broadcasting: 3\nI0921 11:55:38.252730 3942 log.go:181] (0x303c000) (0x2e9a070) Stream removed, broadcasting: 5\n" Sep 21 11:55:38.262: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 21 11:55:38.262: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Sep 21 11:55:48.318: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Sep 21 11:55:58.368: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3705 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 11:55:59.920: INFO: stderr: "I0921 11:55:59.805023 3962 log.go:181] (0x2da8000) (0x2da8070) Create stream\nI0921 11:55:59.807824 3962 log.go:181] (0x2da8000) (0x2da8070) Stream added, broadcasting: 1\nI0921 11:55:59.818344 3962 log.go:181] (0x2da8000) Reply frame received for 1\nI0921 11:55:59.819043 3962 log.go:181] (0x2da8000) (0x2da8310) Create stream\nI0921 11:55:59.819132 3962 log.go:181] (0x2da8000) (0x2da8310) Stream added, broadcasting: 3\nI0921 11:55:59.820499 3962 log.go:181] (0x2da8000) Reply frame received for 3\nI0921 11:55:59.820700 3962 log.go:181] (0x2da8000) (0x2d6b2d0) Create stream\nI0921 11:55:59.820764 3962 log.go:181] (0x2da8000) (0x2d6b2d0) Stream added, broadcasting: 5\nI0921 11:55:59.822094 3962 log.go:181] (0x2da8000) Reply frame received for 5\nI0921 11:55:59.903139 3962 log.go:181] (0x2da8000) 
Data frame received for 5\nI0921 11:55:59.903429 3962 log.go:181] (0x2d6b2d0) (5) Data frame handling\nI0921 11:55:59.903593 3962 log.go:181] (0x2da8000) Data frame received for 3\nI0921 11:55:59.903859 3962 log.go:181] (0x2da8310) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0921 11:55:59.904225 3962 log.go:181] (0x2d6b2d0) (5) Data frame sent\nI0921 11:55:59.904505 3962 log.go:181] (0x2da8310) (3) Data frame sent\nI0921 11:55:59.904734 3962 log.go:181] (0x2da8000) Data frame received for 3\nI0921 11:55:59.904834 3962 log.go:181] (0x2da8310) (3) Data frame handling\nI0921 11:55:59.905114 3962 log.go:181] (0x2da8000) Data frame received for 5\nI0921 11:55:59.905232 3962 log.go:181] (0x2d6b2d0) (5) Data frame handling\nI0921 11:55:59.905329 3962 log.go:181] (0x2da8000) Data frame received for 1\nI0921 11:55:59.905453 3962 log.go:181] (0x2da8070) (1) Data frame handling\nI0921 11:55:59.905586 3962 log.go:181] (0x2da8070) (1) Data frame sent\nI0921 11:55:59.907474 3962 log.go:181] (0x2da8000) (0x2da8070) Stream removed, broadcasting: 1\nI0921 11:55:59.909722 3962 log.go:181] (0x2da8000) Go away received\nI0921 11:55:59.912785 3962 log.go:181] (0x2da8000) (0x2da8070) Stream removed, broadcasting: 1\nI0921 11:55:59.912990 3962 log.go:181] (0x2da8000) (0x2da8310) Stream removed, broadcasting: 3\nI0921 11:55:59.913157 3962 log.go:181] (0x2da8000) (0x2d6b2d0) Stream removed, broadcasting: 5\n" Sep 21 11:55:59.920: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 21 11:55:59.921: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 21 11:56:19.962: INFO: Waiting for StatefulSet statefulset-3705/ss2 to complete update Sep 21 11:56:19.963: INFO: Waiting for Pod statefulset-3705/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Sep 21 11:56:29.979: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3705 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 21 11:56:31.498: INFO: stderr: "I0921 11:56:31.366384 3982 log.go:181] (0x30200e0) (0x30201c0) Create stream\nI0921 11:56:31.368073 3982 log.go:181] (0x30200e0) (0x30201c0) Stream added, broadcasting: 1\nI0921 11:56:31.378045 3982 log.go:181] (0x30200e0) Reply frame received for 1\nI0921 11:56:31.378843 3982 log.go:181] (0x30200e0) (0x25dc620) Create stream\nI0921 11:56:31.378962 3982 log.go:181] (0x30200e0) (0x25dc620) Stream added, broadcasting: 3\nI0921 11:56:31.380583 3982 log.go:181] (0x30200e0) Reply frame received for 3\nI0921 11:56:31.380936 3982 log.go:181] (0x30200e0) (0x3097420) Create stream\nI0921 11:56:31.381047 3982 log.go:181] (0x30200e0) (0x3097420) Stream added, broadcasting: 5\nI0921 11:56:31.382354 3982 log.go:181] (0x30200e0) Reply frame received for 5\nI0921 11:56:31.448047 3982 log.go:181] (0x30200e0) Data frame received for 5\nI0921 11:56:31.448370 3982 log.go:181] (0x3097420) (5) Data frame handling\nI0921 11:56:31.448848 3982 log.go:181] (0x3097420) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0921 11:56:31.482424 3982 log.go:181] (0x30200e0) Data frame received for 3\nI0921 11:56:31.482548 3982 log.go:181] (0x25dc620) (3) Data frame handling\nI0921 11:56:31.482648 3982 log.go:181] (0x25dc620) (3) Data frame sent\nI0921 11:56:31.482731 3982 log.go:181] (0x30200e0) Data frame received for 3\nI0921 11:56:31.482850 3982 log.go:181] (0x30200e0) Data frame received for 5\nI0921 11:56:31.483068 3982 log.go:181] (0x3097420) (5) Data frame handling\nI0921 11:56:31.483202 3982 log.go:181] (0x25dc620) (3) Data frame handling\nI0921 11:56:31.484698 3982 log.go:181] (0x30200e0) Data frame received for 1\nI0921 11:56:31.484823 3982 log.go:181] (0x30201c0) (1) Data frame handling\nI0921 11:56:31.484927 3982 
log.go:181] (0x30201c0) (1) Data frame sent\nI0921 11:56:31.485619 3982 log.go:181] (0x30200e0) (0x30201c0) Stream removed, broadcasting: 1\nI0921 11:56:31.487492 3982 log.go:181] (0x30200e0) Go away received\nI0921 11:56:31.490288 3982 log.go:181] (0x30200e0) (0x30201c0) Stream removed, broadcasting: 1\nI0921 11:56:31.490449 3982 log.go:181] (0x30200e0) (0x25dc620) Stream removed, broadcasting: 3\nI0921 11:56:31.490573 3982 log.go:181] (0x30200e0) (0x3097420) Stream removed, broadcasting: 5\n" Sep 21 11:56:31.499: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 21 11:56:31.499: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 21 11:56:41.551: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Sep 21 11:56:51.650: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3705 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 11:56:53.179: INFO: stderr: "I0921 11:56:53.051998 4003 log.go:181] (0x29e7b20) (0x29e7b90) Create stream\nI0921 11:56:53.054913 4003 log.go:181] (0x29e7b20) (0x29e7b90) Stream added, broadcasting: 1\nI0921 11:56:53.066979 4003 log.go:181] (0x29e7b20) Reply frame received for 1\nI0921 11:56:53.067539 4003 log.go:181] (0x29e7b20) (0x2d96070) Create stream\nI0921 11:56:53.067627 4003 log.go:181] (0x29e7b20) (0x2d96070) Stream added, broadcasting: 3\nI0921 11:56:53.069872 4003 log.go:181] (0x29e7b20) Reply frame received for 3\nI0921 11:56:53.070282 4003 log.go:181] (0x29e7b20) (0x2d962a0) Create stream\nI0921 11:56:53.070401 4003 log.go:181] (0x29e7b20) (0x2d962a0) Stream added, broadcasting: 5\nI0921 11:56:53.072277 4003 log.go:181] (0x29e7b20) Reply frame received for 5\nI0921 11:56:53.159334 4003 log.go:181] (0x29e7b20) Data frame received for 5\nI0921 11:56:53.159780 
4003 log.go:181] (0x2d962a0) (5) Data frame handling\nI0921 11:56:53.160096 4003 log.go:181] (0x29e7b20) Data frame received for 3\nI0921 11:56:53.160428 4003 log.go:181] (0x2d962a0) (5) Data frame sent\nI0921 11:56:53.160847 4003 log.go:181] (0x2d96070) (3) Data frame handling\nI0921 11:56:53.161176 4003 log.go:181] (0x29e7b20) Data frame received for 1\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0921 11:56:53.161399 4003 log.go:181] (0x29e7b90) (1) Data frame handling\nI0921 11:56:53.161565 4003 log.go:181] (0x2d96070) (3) Data frame sent\nI0921 11:56:53.161865 4003 log.go:181] (0x29e7b90) (1) Data frame sent\nI0921 11:56:53.162019 4003 log.go:181] (0x29e7b20) Data frame received for 3\nI0921 11:56:53.162175 4003 log.go:181] (0x2d96070) (3) Data frame handling\nI0921 11:56:53.162321 4003 log.go:181] (0x29e7b20) Data frame received for 5\nI0921 11:56:53.162507 4003 log.go:181] (0x2d962a0) (5) Data frame handling\nI0921 11:56:53.165214 4003 log.go:181] (0x29e7b20) (0x29e7b90) Stream removed, broadcasting: 1\nI0921 11:56:53.167056 4003 log.go:181] (0x29e7b20) Go away received\nI0921 11:56:53.170562 4003 log.go:181] (0x29e7b20) (0x29e7b90) Stream removed, broadcasting: 1\nI0921 11:56:53.170769 4003 log.go:181] (0x29e7b20) (0x2d96070) Stream removed, broadcasting: 3\nI0921 11:56:53.170954 4003 log.go:181] (0x29e7b20) (0x2d962a0) Stream removed, broadcasting: 5\n" Sep 21 11:56:53.179: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 21 11:56:53.179: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 21 11:57:03.315: INFO: Waiting for StatefulSet statefulset-3705/ss2 to complete update Sep 21 11:57:03.315: INFO: Waiting for Pod statefulset-3705/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Sep 21 11:57:03.316: INFO: Waiting for Pod statefulset-3705/ss2-1 to have revision ss2-65c7964b94 update revision 
ss2-84f9d6bf57 Sep 21 11:57:13.336: INFO: Waiting for StatefulSet statefulset-3705/ss2 to complete update Sep 21 11:57:13.336: INFO: Waiting for Pod statefulset-3705/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 21 11:57:23.334: INFO: Deleting all statefulset in ns statefulset-3705 Sep 21 11:57:23.339: INFO: Scaling statefulset ss2 to 0 Sep 21 11:57:43.379: INFO: Waiting for statefulset status.replicas updated to 0 Sep 21 11:57:43.385: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:57:43.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3705" for this suite. 
• [SLOW TEST:147.316 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":303,"completed":257,"skipped":4109,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:57:43.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Sep 21 11:57:47.655: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:57:47.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5435" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":258,"skipped":4118,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:57:47.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-nc2p STEP: Creating a pod to test atomic-volume-subpath Sep 21 11:57:47.821: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-nc2p" in namespace "subpath-3236" to be "Succeeded or Failed" Sep 21 11:57:47.843: INFO: Pod "pod-subpath-test-configmap-nc2p": Phase="Pending", Reason="", readiness=false. Elapsed: 22.217024ms Sep 21 11:57:49.994: INFO: Pod "pod-subpath-test-configmap-nc2p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172764093s Sep 21 11:57:52.001: INFO: Pod "pod-subpath-test-configmap-nc2p": Phase="Running", Reason="", readiness=true. Elapsed: 4.180105708s Sep 21 11:57:54.008: INFO: Pod "pod-subpath-test-configmap-nc2p": Phase="Running", Reason="", readiness=true. Elapsed: 6.187341296s Sep 21 11:57:56.016: INFO: Pod "pod-subpath-test-configmap-nc2p": Phase="Running", Reason="", readiness=true. Elapsed: 8.194895403s Sep 21 11:57:58.023: INFO: Pod "pod-subpath-test-configmap-nc2p": Phase="Running", Reason="", readiness=true. Elapsed: 10.202184083s Sep 21 11:58:00.031: INFO: Pod "pod-subpath-test-configmap-nc2p": Phase="Running", Reason="", readiness=true. Elapsed: 12.209432683s Sep 21 11:58:02.037: INFO: Pod "pod-subpath-test-configmap-nc2p": Phase="Running", Reason="", readiness=true. Elapsed: 14.216158814s Sep 21 11:58:04.045: INFO: Pod "pod-subpath-test-configmap-nc2p": Phase="Running", Reason="", readiness=true. Elapsed: 16.223742947s Sep 21 11:58:06.053: INFO: Pod "pod-subpath-test-configmap-nc2p": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.231848693s Sep 21 11:58:08.061: INFO: Pod "pod-subpath-test-configmap-nc2p": Phase="Running", Reason="", readiness=true. Elapsed: 20.239904116s Sep 21 11:58:10.069: INFO: Pod "pod-subpath-test-configmap-nc2p": Phase="Running", Reason="", readiness=true. Elapsed: 22.248080501s Sep 21 11:58:12.077: INFO: Pod "pod-subpath-test-configmap-nc2p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.255930198s STEP: Saw pod success Sep 21 11:58:12.077: INFO: Pod "pod-subpath-test-configmap-nc2p" satisfied condition "Succeeded or Failed" Sep 21 11:58:12.083: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-nc2p container test-container-subpath-configmap-nc2p: STEP: delete the pod Sep 21 11:58:12.132: INFO: Waiting for pod pod-subpath-test-configmap-nc2p to disappear Sep 21 11:58:12.137: INFO: Pod pod-subpath-test-configmap-nc2p no longer exists STEP: Deleting pod pod-subpath-test-configmap-nc2p Sep 21 11:58:12.137: INFO: Deleting pod "pod-subpath-test-configmap-nc2p" in namespace "subpath-3236" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:58:12.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3236" for this suite. 
• [SLOW TEST:24.494 seconds] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":303,"completed":259,"skipped":4147,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:58:12.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 11:58:12.341: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"2cf7a77b-b874-43cb-a709-1502dea914b0", 
Controller:(*bool)(0x680c78a), BlockOwnerDeletion:(*bool)(0x680c78b)}} Sep 21 11:58:12.366: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"01ce8e01-b62d-47a6-873e-068562ddc5fa", Controller:(*bool)(0x680ca2a), BlockOwnerDeletion:(*bool)(0x680ca2b)}} Sep 21 11:58:12.432: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ea863807-849d-49cc-943c-e52d0149e00d", Controller:(*bool)(0x680cf2a), BlockOwnerDeletion:(*bool)(0x680cf2b)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:58:17.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8769" for this suite. • [SLOW TEST:5.323 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":303,"completed":260,"skipped":4160,"failed":0} S ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating 
a kubernetes client Sep 21 11:58:17.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container Sep 21 11:58:22.142: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2291 pod-service-account-a60bc830-fbae-4b72-a54d-5b3e86339580 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Sep 21 11:58:23.684: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2291 pod-service-account-a60bc830-fbae-4b72-a54d-5b3e86339580 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Sep 21 11:58:25.329: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2291 pod-service-account-a60bc830-fbae-4b72-a54d-5b3e86339580 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:58:26.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2291" for this suite. 
• [SLOW TEST:9.355 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":303,"completed":261,"skipped":4161,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:58:26.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Sep 21 11:58:31.083: INFO: Expected: &{OK} to match 
Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 11:58:31.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9889" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":262,"skipped":4174,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 11:58:31.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-7771 [It] 
Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-7771 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7771 Sep 21 11:58:31.358: INFO: Found 0 stateful pods, waiting for 1 Sep 21 11:58:41.372: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Sep 21 11:58:41.378: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 21 11:58:43.040: INFO: stderr: "I0921 11:58:42.833427 4083 log.go:181] (0x2630070) (0x26300e0) Create stream\nI0921 11:58:42.835213 4083 log.go:181] (0x2630070) (0x26300e0) Stream added, broadcasting: 1\nI0921 11:58:42.845910 4083 log.go:181] (0x2630070) Reply frame received for 1\nI0921 11:58:42.846808 4083 log.go:181] (0x2630070) (0x26302a0) Create stream\nI0921 11:58:42.846921 4083 log.go:181] (0x2630070) (0x26302a0) Stream added, broadcasting: 3\nI0921 11:58:42.848915 4083 log.go:181] (0x2630070) Reply frame received for 3\nI0921 11:58:42.849211 4083 log.go:181] (0x2630070) (0x2630460) Create stream\nI0921 11:58:42.849284 4083 log.go:181] (0x2630070) (0x2630460) Stream added, broadcasting: 5\nI0921 11:58:42.850696 4083 log.go:181] (0x2630070) Reply frame received for 5\nI0921 11:58:42.950344 4083 log.go:181] (0x2630070) Data frame received for 5\nI0921 11:58:42.950692 4083 log.go:181] (0x2630460) (5) Data frame handling\nI0921 11:58:42.951369 4083 log.go:181] (0x2630460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0921 11:58:43.007328 4083 log.go:181] 
(0x2630070) Data frame received for 3\nI0921 11:58:43.007446 4083 log.go:181] (0x26302a0) (3) Data frame handling\nI0921 11:58:43.007553 4083 log.go:181] (0x2630070) Data frame received for 5\nI0921 11:58:43.007682 4083 log.go:181] (0x2630460) (5) Data frame handling\nI0921 11:58:43.008121 4083 log.go:181] (0x26302a0) (3) Data frame sent\nI0921 11:58:43.008495 4083 log.go:181] (0x2630070) Data frame received for 3\nI0921 11:58:43.008619 4083 log.go:181] (0x26302a0) (3) Data frame handling\nI0921 11:58:43.009622 4083 log.go:181] (0x2630070) Data frame received for 1\nI0921 11:58:43.009738 4083 log.go:181] (0x26300e0) (1) Data frame handling\nI0921 11:58:43.009858 4083 log.go:181] (0x26300e0) (1) Data frame sent\nI0921 11:58:43.010661 4083 log.go:181] (0x2630070) (0x26300e0) Stream removed, broadcasting: 1\nI0921 11:58:43.014230 4083 log.go:181] (0x2630070) Go away received\nI0921 11:58:43.032038 4083 log.go:181] (0x2630070) (0x26300e0) Stream removed, broadcasting: 1\nI0921 11:58:43.032435 4083 log.go:181] (0x2630070) (0x26302a0) Stream removed, broadcasting: 3\nI0921 11:58:43.032671 4083 log.go:181] (0x2630070) (0x2630460) Stream removed, broadcasting: 5\n" Sep 21 11:58:43.041: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 21 11:58:43.041: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 21 11:58:43.047: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Sep 21 11:58:53.057: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 21 11:58:53.057: INFO: Waiting for statefulset status.replicas updated to 0 Sep 21 11:58:53.099: INFO: POD NODE PHASE GRACE CONDITIONS Sep 21 11:58:53.101: INFO: ss-0 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:43 
+0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:31 +0000 UTC }] Sep 21 11:58:53.102: INFO: Sep 21 11:58:53.102: INFO: StatefulSet ss has not reached scale 3, at 1 Sep 21 11:58:54.110: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.970900975s Sep 21 11:58:55.369: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.963265193s Sep 21 11:58:56.511: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.703732276s Sep 21 11:58:57.541: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.561881217s Sep 21 11:58:58.564: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.531793687s Sep 21 11:58:59.578: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.508434775s Sep 21 11:59:00.587: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.495419216s Sep 21 11:59:01.597: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.486482069s Sep 21 11:59:02.607: INFO: Verifying statefulset ss doesn't scale past 3 for another 475.655442ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7771 Sep 21 11:59:03.618: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 11:59:05.108: INFO: stderr: "I0921 11:59:04.996429 4103 log.go:181] (0x30360e0) (0x3036150) Create stream\nI0921 11:59:04.999648 4103 log.go:181] (0x30360e0) (0x3036150) Stream added, broadcasting: 1\nI0921 11:59:05.009189 4103 log.go:181] (0x30360e0) Reply frame received for 1\nI0921 11:59:05.009834 4103 log.go:181] (0x30360e0) (0x2d24070) 
Create stream\nI0921 11:59:05.009917 4103 log.go:181] (0x30360e0) (0x2d24070) Stream added, broadcasting: 3\nI0921 11:59:05.011297 4103 log.go:181] (0x30360e0) Reply frame received for 3\nI0921 11:59:05.011513 4103 log.go:181] (0x30360e0) (0x3036310) Create stream\nI0921 11:59:05.011583 4103 log.go:181] (0x30360e0) (0x3036310) Stream added, broadcasting: 5\nI0921 11:59:05.012878 4103 log.go:181] (0x30360e0) Reply frame received for 5\nI0921 11:59:05.088082 4103 log.go:181] (0x30360e0) Data frame received for 5\nI0921 11:59:05.088468 4103 log.go:181] (0x30360e0) Data frame received for 3\nI0921 11:59:05.088693 4103 log.go:181] (0x2d24070) (3) Data frame handling\nI0921 11:59:05.088942 4103 log.go:181] (0x30360e0) Data frame received for 1\nI0921 11:59:05.089117 4103 log.go:181] (0x3036150) (1) Data frame handling\nI0921 11:59:05.089422 4103 log.go:181] (0x3036310) (5) Data frame handling\nI0921 11:59:05.090221 4103 log.go:181] (0x3036150) (1) Data frame sent\nI0921 11:59:05.090368 4103 log.go:181] (0x2d24070) (3) Data frame sent\nI0921 11:59:05.090635 4103 log.go:181] (0x3036310) (5) Data frame sent\nI0921 11:59:05.090944 4103 log.go:181] (0x30360e0) Data frame received for 5\nI0921 11:59:05.091081 4103 log.go:181] (0x3036310) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0921 11:59:05.091279 4103 log.go:181] (0x30360e0) Data frame received for 3\nI0921 11:59:05.091388 4103 log.go:181] (0x2d24070) (3) Data frame handling\nI0921 11:59:05.092706 4103 log.go:181] (0x30360e0) (0x3036150) Stream removed, broadcasting: 1\nI0921 11:59:05.095087 4103 log.go:181] (0x30360e0) Go away received\nI0921 11:59:05.096649 4103 log.go:181] (0x30360e0) (0x3036150) Stream removed, broadcasting: 1\nI0921 11:59:05.096896 4103 log.go:181] (0x30360e0) (0x2d24070) Stream removed, broadcasting: 3\nI0921 11:59:05.097021 4103 log.go:181] (0x30360e0) (0x3036310) Stream removed, broadcasting: 5\n" Sep 21 11:59:05.109: INFO: stdout: "'/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html'\n" Sep 21 11:59:05.109: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 21 11:59:05.109: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 11:59:06.632: INFO: stderr: "I0921 11:59:06.487154 4123 log.go:181] (0x25a4230) (0x25a42a0) Create stream\nI0921 11:59:06.489317 4123 log.go:181] (0x25a4230) (0x25a42a0) Stream added, broadcasting: 1\nI0921 11:59:06.504421 4123 log.go:181] (0x25a4230) Reply frame received for 1\nI0921 11:59:06.504995 4123 log.go:181] (0x25a4230) (0x25a45b0) Create stream\nI0921 11:59:06.505077 4123 log.go:181] (0x25a4230) (0x25a45b0) Stream added, broadcasting: 3\nI0921 11:59:06.506676 4123 log.go:181] (0x25a4230) Reply frame received for 3\nI0921 11:59:06.506941 4123 log.go:181] (0x25a4230) (0x28988c0) Create stream\nI0921 11:59:06.507009 4123 log.go:181] (0x25a4230) (0x28988c0) Stream added, broadcasting: 5\nI0921 11:59:06.508472 4123 log.go:181] (0x25a4230) Reply frame received for 5\nI0921 11:59:06.601473 4123 log.go:181] (0x25a4230) Data frame received for 3\nI0921 11:59:06.604057 4123 log.go:181] (0x25a45b0) (3) Data frame handling\nI0921 11:59:06.604792 4123 log.go:181] (0x25a45b0) (3) Data frame sent\nI0921 11:59:06.605008 4123 log.go:181] (0x25a4230) Data frame received for 3\nI0921 11:59:06.605105 4123 log.go:181] (0x25a45b0) (3) Data frame handling\nI0921 11:59:06.610036 4123 log.go:181] (0x25a4230) Data frame received for 5\nI0921 11:59:06.610222 4123 log.go:181] (0x28988c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0921 11:59:06.612369 4123 log.go:181] (0x28988c0) (5) Data frame sent\nI0921 11:59:06.612522 4123 
log.go:181] (0x25a4230) Data frame received for 5\nI0921 11:59:06.612606 4123 log.go:181] (0x28988c0) (5) Data frame handling\nI0921 11:59:06.615050 4123 log.go:181] (0x25a4230) Data frame received for 1\nI0921 11:59:06.615733 4123 log.go:181] (0x25a42a0) (1) Data frame handling\nI0921 11:59:06.615922 4123 log.go:181] (0x25a42a0) (1) Data frame sent\nI0921 11:59:06.621286 4123 log.go:181] (0x25a4230) (0x25a42a0) Stream removed, broadcasting: 1\nI0921 11:59:06.622833 4123 log.go:181] (0x25a4230) Go away received\nI0921 11:59:06.625038 4123 log.go:181] (0x25a4230) (0x25a42a0) Stream removed, broadcasting: 1\nI0921 11:59:06.625185 4123 log.go:181] (0x25a4230) (0x25a45b0) Stream removed, broadcasting: 3\nI0921 11:59:06.625302 4123 log.go:181] (0x25a4230) (0x28988c0) Stream removed, broadcasting: 5\n" Sep 21 11:59:06.633: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 21 11:59:06.633: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 21 11:59:06.634: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 11:59:08.078: INFO: stderr: "I0921 11:59:07.963894 4144 log.go:181] (0x2b436c0) (0x2b43730) Create stream\nI0921 11:59:07.966832 4144 log.go:181] (0x2b436c0) (0x2b43730) Stream added, broadcasting: 1\nI0921 11:59:07.978218 4144 log.go:181] (0x2b436c0) Reply frame received for 1\nI0921 11:59:07.978659 4144 log.go:181] (0x2b436c0) (0x2b438f0) Create stream\nI0921 11:59:07.978717 4144 log.go:181] (0x2b436c0) (0x2b438f0) Stream added, broadcasting: 3\nI0921 11:59:07.979952 4144 log.go:181] (0x2b436c0) Reply frame received for 3\nI0921 11:59:07.980201 4144 log.go:181] (0x2b436c0) (0x2b43ab0) Create stream\nI0921 11:59:07.980271 4144 log.go:181] (0x2b436c0) (0x2b43ab0) Stream 
added, broadcasting: 5\nI0921 11:59:07.982024 4144 log.go:181] (0x2b436c0) Reply frame received for 5\nI0921 11:59:08.058717 4144 log.go:181] (0x2b436c0) Data frame received for 1\nI0921 11:59:08.059476 4144 log.go:181] (0x2b436c0) Data frame received for 5\nI0921 11:59:08.059653 4144 log.go:181] (0x2b43730) (1) Data frame handling\nI0921 11:59:08.059931 4144 log.go:181] (0x2b436c0) Data frame received for 3\nI0921 11:59:08.060132 4144 log.go:181] (0x2b438f0) (3) Data frame handling\nI0921 11:59:08.060329 4144 log.go:181] (0x2b43ab0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0921 11:59:08.061506 4144 log.go:181] (0x2b438f0) (3) Data frame sent\nI0921 11:59:08.061590 4144 log.go:181] (0x2b43ab0) (5) Data frame sent\nI0921 11:59:08.061916 4144 log.go:181] (0x2b43730) (1) Data frame sent\nI0921 11:59:08.062249 4144 log.go:181] (0x2b436c0) Data frame received for 5\nI0921 11:59:08.062469 4144 log.go:181] (0x2b43ab0) (5) Data frame handling\nI0921 11:59:08.062606 4144 log.go:181] (0x2b436c0) Data frame received for 3\nI0921 11:59:08.062746 4144 log.go:181] (0x2b438f0) (3) Data frame handling\nI0921 11:59:08.063794 4144 log.go:181] (0x2b436c0) (0x2b43730) Stream removed, broadcasting: 1\nI0921 11:59:08.066149 4144 log.go:181] (0x2b436c0) Go away received\nI0921 11:59:08.069136 4144 log.go:181] (0x2b436c0) (0x2b43730) Stream removed, broadcasting: 1\nI0921 11:59:08.069404 4144 log.go:181] (0x2b436c0) (0x2b438f0) Stream removed, broadcasting: 3\nI0921 11:59:08.069632 4144 log.go:181] (0x2b436c0) (0x2b43ab0) Stream removed, broadcasting: 5\n" Sep 21 11:59:08.079: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 21 11:59:08.080: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 21 11:59:08.089: INFO: Waiting for pod ss-0 to enter Running - 
Ready=true, currently Running - Ready=true Sep 21 11:59:08.089: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Sep 21 11:59:08.089: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Sep 21 11:59:08.097: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 21 11:59:09.653: INFO: stderr: "I0921 11:59:09.525286 4164 log.go:181] (0x2f2e000) (0x2f2e070) Create stream\nI0921 11:59:09.527562 4164 log.go:181] (0x2f2e000) (0x2f2e070) Stream added, broadcasting: 1\nI0921 11:59:09.537620 4164 log.go:181] (0x2f2e000) Reply frame received for 1\nI0921 11:59:09.538317 4164 log.go:181] (0x2f2e000) (0x2f2e1c0) Create stream\nI0921 11:59:09.538426 4164 log.go:181] (0x2f2e000) (0x2f2e1c0) Stream added, broadcasting: 3\nI0921 11:59:09.540557 4164 log.go:181] (0x2f2e000) Reply frame received for 3\nI0921 11:59:09.540865 4164 log.go:181] (0x2f2e000) (0x25fba40) Create stream\nI0921 11:59:09.540934 4164 log.go:181] (0x2f2e000) (0x25fba40) Stream added, broadcasting: 5\nI0921 11:59:09.542262 4164 log.go:181] (0x2f2e000) Reply frame received for 5\nI0921 11:59:09.635433 4164 log.go:181] (0x2f2e000) Data frame received for 3\nI0921 11:59:09.635589 4164 log.go:181] (0x2f2e000) Data frame received for 1\nI0921 11:59:09.635821 4164 log.go:181] (0x2f2e000) Data frame received for 5\nI0921 11:59:09.636070 4164 log.go:181] (0x25fba40) (5) Data frame handling\nI0921 11:59:09.636288 4164 log.go:181] (0x2f2e070) (1) Data frame handling\nI0921 11:59:09.636604 4164 log.go:181] (0x2f2e1c0) (3) Data frame handling\nI0921 11:59:09.637940 4164 log.go:181] (0x2f2e1c0) (3) Data frame sent\nI0921 11:59:09.638116 4164 log.go:181] (0x25fba40) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0921 11:59:09.638448 4164 log.go:181] (0x2f2e070) (1) Data frame sent\nI0921 11:59:09.638790 4164 log.go:181] (0x2f2e000) Data frame received for 3\nI0921 11:59:09.638986 4164 log.go:181] (0x2f2e1c0) (3) Data frame handling\nI0921 11:59:09.639238 4164 log.go:181] (0x2f2e000) Data frame received for 5\nI0921 11:59:09.639382 4164 log.go:181] (0x25fba40) (5) Data frame handling\nI0921 11:59:09.640543 4164 log.go:181] (0x2f2e000) (0x2f2e070) Stream removed, broadcasting: 1\nI0921 11:59:09.642691 4164 log.go:181] (0x2f2e000) Go away received\nI0921 11:59:09.645263 4164 log.go:181] (0x2f2e000) (0x2f2e070) Stream removed, broadcasting: 1\nI0921 11:59:09.645503 4164 log.go:181] (0x2f2e000) (0x2f2e1c0) Stream removed, broadcasting: 3\nI0921 11:59:09.645667 4164 log.go:181] (0x2f2e000) (0x25fba40) Stream removed, broadcasting: 5\n" Sep 21 11:59:09.654: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 21 11:59:09.654: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 21 11:59:09.654: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 21 11:59:11.230: INFO: stderr: "I0921 11:59:11.097450 4184 log.go:181] (0x25a4230) (0x25a42a0) Create stream\nI0921 11:59:11.100582 4184 log.go:181] (0x25a4230) (0x25a42a0) Stream added, broadcasting: 1\nI0921 11:59:11.109468 4184 log.go:181] (0x25a4230) Reply frame received for 1\nI0921 11:59:11.109914 4184 log.go:181] (0x25a4230) (0x247c770) Create stream\nI0921 11:59:11.109971 4184 log.go:181] (0x25a4230) (0x247c770) Stream added, broadcasting: 3\nI0921 11:59:11.111546 4184 log.go:181] (0x25a4230) Reply frame received for 3\nI0921 11:59:11.111985 4184 log.go:181] (0x25a4230) (0x2894ee0) Create 
stream\nI0921 11:59:11.112095 4184 log.go:181] (0x25a4230) (0x2894ee0) Stream added, broadcasting: 5\nI0921 11:59:11.113601 4184 log.go:181] (0x25a4230) Reply frame received for 5\nI0921 11:59:11.183199 4184 log.go:181] (0x25a4230) Data frame received for 5\nI0921 11:59:11.183482 4184 log.go:181] (0x2894ee0) (5) Data frame handling\nI0921 11:59:11.184032 4184 log.go:181] (0x2894ee0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0921 11:59:11.211447 4184 log.go:181] (0x25a4230) Data frame received for 5\nI0921 11:59:11.211637 4184 log.go:181] (0x2894ee0) (5) Data frame handling\nI0921 11:59:11.211766 4184 log.go:181] (0x25a4230) Data frame received for 3\nI0921 11:59:11.211915 4184 log.go:181] (0x247c770) (3) Data frame handling\nI0921 11:59:11.212048 4184 log.go:181] (0x247c770) (3) Data frame sent\nI0921 11:59:11.212224 4184 log.go:181] (0x25a4230) Data frame received for 3\nI0921 11:59:11.212337 4184 log.go:181] (0x247c770) (3) Data frame handling\nI0921 11:59:11.213484 4184 log.go:181] (0x25a4230) Data frame received for 1\nI0921 11:59:11.213656 4184 log.go:181] (0x25a42a0) (1) Data frame handling\nI0921 11:59:11.213808 4184 log.go:181] (0x25a42a0) (1) Data frame sent\nI0921 11:59:11.214720 4184 log.go:181] (0x25a4230) (0x25a42a0) Stream removed, broadcasting: 1\nI0921 11:59:11.217614 4184 log.go:181] (0x25a4230) Go away received\nI0921 11:59:11.221497 4184 log.go:181] (0x25a4230) (0x25a42a0) Stream removed, broadcasting: 1\nI0921 11:59:11.221711 4184 log.go:181] (0x25a4230) (0x247c770) Stream removed, broadcasting: 3\nI0921 11:59:11.221890 4184 log.go:181] (0x25a4230) (0x2894ee0) Stream removed, broadcasting: 5\n" Sep 21 11:59:11.231: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 21 11:59:11.232: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 21 11:59:11.232: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 21 11:59:12.735: INFO: stderr: "I0921 11:59:12.586273 4205 log.go:181] (0x2e5ff10) (0x2e5ff80) Create stream\nI0921 11:59:12.591106 4205 log.go:181] (0x2e5ff10) (0x2e5ff80) Stream added, broadcasting: 1\nI0921 11:59:12.601276 4205 log.go:181] (0x2e5ff10) Reply frame received for 1\nI0921 11:59:12.602092 4205 log.go:181] (0x2e5ff10) (0x30ba070) Create stream\nI0921 11:59:12.602191 4205 log.go:181] (0x2e5ff10) (0x30ba070) Stream added, broadcasting: 3\nI0921 11:59:12.604242 4205 log.go:181] (0x2e5ff10) Reply frame received for 3\nI0921 11:59:12.604795 4205 log.go:181] (0x2e5ff10) (0x2938150) Create stream\nI0921 11:59:12.604957 4205 log.go:181] (0x2e5ff10) (0x2938150) Stream added, broadcasting: 5\nI0921 11:59:12.606844 4205 log.go:181] (0x2e5ff10) Reply frame received for 5\nI0921 11:59:12.691130 4205 log.go:181] (0x2e5ff10) Data frame received for 5\nI0921 11:59:12.691384 4205 log.go:181] (0x2938150) (5) Data frame handling\nI0921 11:59:12.691809 4205 log.go:181] (0x2938150) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0921 11:59:12.716712 4205 log.go:181] (0x2e5ff10) Data frame received for 3\nI0921 11:59:12.716934 4205 log.go:181] (0x30ba070) (3) Data frame handling\nI0921 11:59:12.717055 4205 log.go:181] (0x2e5ff10) Data frame received for 5\nI0921 11:59:12.717215 4205 log.go:181] (0x2938150) (5) Data frame handling\nI0921 11:59:12.717329 4205 log.go:181] (0x30ba070) (3) Data frame sent\nI0921 11:59:12.717478 4205 log.go:181] (0x2e5ff10) Data frame received for 3\nI0921 11:59:12.717587 4205 log.go:181] (0x30ba070) (3) Data frame handling\nI0921 11:59:12.718118 4205 log.go:181] (0x2e5ff10) Data frame received for 1\nI0921 11:59:12.718298 4205 log.go:181] (0x2e5ff80) (1) Data frame handling\nI0921 11:59:12.718477 4205 log.go:181] (0x2e5ff80) (1) Data 
frame sent\nI0921 11:59:12.719780 4205 log.go:181] (0x2e5ff10) (0x2e5ff80) Stream removed, broadcasting: 1\nI0921 11:59:12.722544 4205 log.go:181] (0x2e5ff10) Go away received\nI0921 11:59:12.724959 4205 log.go:181] (0x2e5ff10) (0x2e5ff80) Stream removed, broadcasting: 1\nI0921 11:59:12.725257 4205 log.go:181] (0x2e5ff10) (0x30ba070) Stream removed, broadcasting: 3\nI0921 11:59:12.725464 4205 log.go:181] (0x2e5ff10) (0x2938150) Stream removed, broadcasting: 5\n" Sep 21 11:59:12.736: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 21 11:59:12.737: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 21 11:59:12.737: INFO: Waiting for statefulset status.replicas updated to 0 Sep 21 11:59:12.745: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Sep 21 11:59:22.764: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 21 11:59:22.765: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Sep 21 11:59:22.765: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Sep 21 11:59:22.802: INFO: POD NODE PHASE GRACE CONDITIONS Sep 21 11:59:22.803: INFO: ss-0 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:31 +0000 UTC }] Sep 21 11:59:22.803: INFO: ss-1 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:11 
+0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:53 +0000 UTC }] Sep 21 11:59:22.804: INFO: ss-2 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:53 +0000 UTC }] Sep 21 11:59:22.804: INFO: Sep 21 11:59:22.804: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 21 11:59:23.814: INFO: POD NODE PHASE GRACE CONDITIONS Sep 21 11:59:23.814: INFO: ss-0 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:31 +0000 UTC }] Sep 21 11:59:23.814: INFO: ss-1 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:53 +0000 UTC }] Sep 21 11:59:23.815: INFO: ss-2 
kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:53 +0000 UTC }] Sep 21 11:59:23.815: INFO: Sep 21 11:59:23.815: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 21 11:59:24.866: INFO: POD NODE PHASE GRACE CONDITIONS Sep 21 11:59:24.866: INFO: ss-0 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:31 +0000 UTC }] Sep 21 11:59:24.867: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:53 +0000 UTC }] Sep 21 11:59:24.868: INFO: ss-2 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:13 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:53 +0000 UTC }] Sep 21 11:59:24.868: INFO: Sep 21 11:59:24.868: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 21 11:59:25.879: INFO: POD NODE PHASE GRACE CONDITIONS Sep 21 11:59:25.879: INFO: ss-2 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:53 +0000 UTC }] Sep 21 11:59:25.879: INFO: Sep 21 11:59:25.880: INFO: StatefulSet ss has not reached scale 0, at 1 Sep 21 11:59:26.888: INFO: POD NODE PHASE GRACE CONDITIONS Sep 21 11:59:26.888: INFO: ss-2 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:53 +0000 UTC }] Sep 21 11:59:26.888: INFO: Sep 21 11:59:26.888: INFO: StatefulSet ss has not reached scale 0, at 1 Sep 21 11:59:27.895: INFO: POD NODE PHASE GRACE CONDITIONS Sep 21 11:59:27.895: INFO: ss-2 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:13 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:53 +0000 UTC }] Sep 21 11:59:27.895: INFO: Sep 21 11:59:27.895: INFO: StatefulSet ss has not reached scale 0, at 1 Sep 21 11:59:28.904: INFO: POD NODE PHASE GRACE CONDITIONS Sep 21 11:59:28.904: INFO: ss-2 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:53 +0000 UTC }] Sep 21 11:59:28.905: INFO: Sep 21 11:59:28.905: INFO: StatefulSet ss has not reached scale 0, at 1 Sep 21 11:59:29.914: INFO: POD NODE PHASE GRACE CONDITIONS Sep 21 11:59:29.914: INFO: ss-2 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:53 +0000 UTC }] Sep 21 11:59:29.915: INFO: Sep 21 11:59:29.915: INFO: StatefulSet ss has not reached scale 0, at 1 Sep 21 11:59:30.924: INFO: POD NODE PHASE GRACE CONDITIONS Sep 21 11:59:30.925: INFO: ss-2 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:13 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:53 +0000 UTC }] Sep 21 11:59:30.925: INFO: Sep 21 11:59:30.925: INFO: StatefulSet ss has not reached scale 0, at 1 Sep 21 11:59:31.937: INFO: POD NODE PHASE GRACE CONDITIONS Sep 21 11:59:31.937: INFO: ss-2 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:59:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-21 11:58:53 +0000 UTC }] Sep 21 11:59:31.937: INFO: Sep 21 11:59:31.938: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-7771 Sep 21 11:59:32.949: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 11:59:34.210: INFO: rc: 1 Sep 21 11:59:34.211: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 11:59:44.212: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 11:59:45.412: INFO: rc: 1 Sep 21 11:59:45.413: INFO: Waiting 10s to
retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 11:59:55.414: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 11:59:56.640: INFO: rc: 1 Sep 21 11:59:56.640: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 12:00:06.643: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 12:00:07.871: INFO: rc: 1 Sep 21 12:00:07.871: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 12:00:17.872: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 12:00:19.082: INFO: rc: 1 Sep 21 12:00:19.083: INFO: Waiting 10s to retry failed RunHostCmd: 
error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 12:00:29.084: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 12:00:30.369: INFO: rc: 1 Sep 21 12:00:30.370: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 12:00:40.371: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 12:00:41.551: INFO: rc: 1 Sep 21 12:00:41.552: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 12:00:51.553: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 12:00:52.770: INFO: rc: 1 Sep 21 12:00:52.770: INFO: Waiting 10s to retry failed RunHostCmd: error running 
/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 12:01:02.771: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 12:01:03.933: INFO: rc: 1 Sep 21 12:01:03.934: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 12:01:13.934: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 12:01:15.143: INFO: rc: 1 Sep 21 12:01:15.143: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 12:01:25.144: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 12:01:26.404: INFO: rc: 1 Sep 21 12:01:26.404: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 12:01:36.405: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 12:01:37.598: INFO: rc: 1 Sep 21 12:01:37.598: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 12:01:47.599: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 12:01:49.272: INFO: rc: 1 Sep 21 12:01:49.272: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 12:01:59.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 12:02:03.368: INFO: rc: 1 Sep 21 12:02:03.368: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 12:02:13.369: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 12:02:14.607: INFO: rc: 1 Sep 21 12:02:14.607: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 12:02:24.608: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 12:02:25.840: INFO: rc: 1 Sep 21 12:02:25.840: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 12:02:35.846: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 12:02:37.023: INFO: rc: 1 Sep 21 12:02:37.023: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 12:02:47.024: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 12:02:48.253: INFO: rc: 1 Sep 21 12:02:48.254: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 12:02:58.255: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 12:02:59.509: INFO: rc: 1 Sep 21 12:02:59.510: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 12:03:09.511: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 12:03:10.737: INFO: rc: 1 Sep 21 12:03:10.738: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 12:03:20.738: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 12:03:21.954: INFO: rc: 1 Sep 21 12:03:21.954: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 12:03:31.955: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 12:03:33.242: INFO: rc: 1 Sep 21 12:03:33.243: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 12:03:43.244: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 12:03:44.442: INFO: rc: 1 Sep 21 12:03:44.442: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 12:03:54.443: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 12:03:55.622: INFO: rc: 1 Sep 21 12:03:55.622: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 12:04:05.623: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 12:04:06.864: INFO: rc: 1 Sep 21 12:04:06.864: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 12:04:16.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 12:04:18.142: INFO: rc: 1 Sep 21 12:04:18.142: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 12:04:28.143: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 12:04:29.381: INFO: rc: 1 Sep 21 12:04:29.381: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Sep 21 12:04:39.382: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7771 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 21 12:04:40.692: INFO: rc: 1 Sep 21 12:04:40.693: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: Sep 21 12:04:40.693: INFO: Scaling statefulset ss to 0 Sep 21 12:04:40.708: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 21 12:04:40.713: INFO: Deleting all statefulset in ns statefulset-7771 Sep 21 12:04:40.718: INFO: Scaling statefulset ss to 0 Sep 21 12:04:40.732: INFO: Waiting for statefulset status.replicas updated to 0 Sep 21 12:04:40.736: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:04:40.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7771" for this suite. • [SLOW TEST:369.609 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":303,"completed":263,"skipped":4236,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 12:04:40.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Sep 21 12:04:49.016: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-1860 PodName:pod-sharedvolume-638e92bc-681a-4296-b583-fde3d4065c43 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 21 12:04:49.016: INFO: >>> kubeConfig: /root/.kube/config I0921 12:04:49.126818 10 log.go:181] (0x85772d0) (0x8577490) Create stream I0921 12:04:49.126984 10 log.go:181] (0x85772d0) (0x8577490) Stream added, broadcasting: 1 I0921 12:04:49.131440 10 log.go:181] (0x85772d0) Reply frame received for 1 I0921 12:04:49.131742 10 log.go:181] (0x85772d0) (0x8577d50) Create stream I0921 12:04:49.131893 10 log.go:181] (0x85772d0) (0x8577d50) Stream added, broadcasting: 3 I0921 12:04:49.134121 10 log.go:181] (0x85772d0) Reply frame received for 3 I0921 12:04:49.134307 10 log.go:181] (0x85772d0) (0xa6625b0) Create stream I0921 12:04:49.134400 10 log.go:181] (0x85772d0) (0xa6625b0) Stream added, broadcasting: 5 I0921 12:04:49.136102 10 log.go:181] (0x85772d0) Reply frame received for 5 I0921 12:04:49.227635 10 log.go:181] (0x85772d0) Data frame received for 5 I0921 12:04:49.227883 10 log.go:181] (0xa6625b0) (5) Data frame handling I0921 12:04:49.228239 10 log.go:181] (0x85772d0) Data frame received for 3 I0921 12:04:49.228449 10 log.go:181] (0x8577d50) (3) Data frame handling I0921 12:04:49.228624 10 log.go:181] (0x8577d50) (3) Data frame sent I0921 12:04:49.228754 10 log.go:181] (0x85772d0) Data frame received for 3 I0921 12:04:49.228918 10 log.go:181] (0x8577d50) (3) Data frame handling I0921 12:04:49.229159 10 log.go:181] (0x85772d0) Data frame received for 1 I0921 12:04:49.229360 10 log.go:181]
(0x8577490) (1) Data frame handling I0921 12:04:49.229591 10 log.go:181] (0x8577490) (1) Data frame sent I0921 12:04:49.229792 10 log.go:181] (0x85772d0) (0x8577490) Stream removed, broadcasting: 1 I0921 12:04:49.230036 10 log.go:181] (0x85772d0) Go away received I0921 12:04:49.230882 10 log.go:181] (0x85772d0) (0x8577490) Stream removed, broadcasting: 1 I0921 12:04:49.231109 10 log.go:181] (0x85772d0) (0x8577d50) Stream removed, broadcasting: 3 I0921 12:04:49.231267 10 log.go:181] (0x85772d0) (0xa6625b0) Stream removed, broadcasting: 5 Sep 21 12:04:49.231: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:04:49.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1860" for this suite. • [SLOW TEST:8.471 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":303,"completed":264,"skipped":4246,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 12:04:49.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Sep 21 12:04:49.336: INFO: Waiting up to 5m0s for pod "pod-e0319e6c-07d0-4aba-8e4c-c240275a8360" in namespace "emptydir-9071" to be "Succeeded or Failed"
Sep 21 12:04:49.350: INFO: Pod "pod-e0319e6c-07d0-4aba-8e4c-c240275a8360": Phase="Pending", Reason="", readiness=false. Elapsed: 14.468754ms
Sep 21 12:04:51.359: INFO: Pod "pod-e0319e6c-07d0-4aba-8e4c-c240275a8360": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023041186s
Sep 21 12:04:53.368: INFO: Pod "pod-e0319e6c-07d0-4aba-8e4c-c240275a8360": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031792722s
STEP: Saw pod success
Sep 21 12:04:53.368: INFO: Pod "pod-e0319e6c-07d0-4aba-8e4c-c240275a8360" satisfied condition "Succeeded or Failed"
Sep 21 12:04:53.374: INFO: Trying to get logs from node kali-worker pod pod-e0319e6c-07d0-4aba-8e4c-c240275a8360 container test-container:
STEP: delete the pod
Sep 21 12:04:53.417: INFO: Waiting for pod pod-e0319e6c-07d0-4aba-8e4c-c240275a8360 to disappear
Sep 21 12:04:53.433: INFO: Pod pod-e0319e6c-07d0-4aba-8e4c-c240275a8360 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 12:04:53.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9071" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":265,"skipped":4295,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 12:04:53.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-de04f425-3ee1-4bea-8c45-55fc0e45e94d
STEP: Creating a pod to test consume configMaps
Sep 21 12:04:53.552: INFO: Waiting up to 5m0s for pod "pod-configmaps-f569d1e0-2009-436c-989f-3fc2dd776484" in namespace "configmap-333" to be "Succeeded or Failed"
Sep 21 12:04:53.567: INFO: Pod "pod-configmaps-f569d1e0-2009-436c-989f-3fc2dd776484": Phase="Pending", Reason="", readiness=false. Elapsed: 15.233501ms
Sep 21 12:04:55.576: INFO: Pod "pod-configmaps-f569d1e0-2009-436c-989f-3fc2dd776484": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024067674s
Sep 21 12:04:57.585: INFO: Pod "pod-configmaps-f569d1e0-2009-436c-989f-3fc2dd776484": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033206877s
STEP: Saw pod success
Sep 21 12:04:57.586: INFO: Pod "pod-configmaps-f569d1e0-2009-436c-989f-3fc2dd776484" satisfied condition "Succeeded or Failed"
Sep 21 12:04:57.591: INFO: Trying to get logs from node kali-worker pod pod-configmaps-f569d1e0-2009-436c-989f-3fc2dd776484 container configmap-volume-test:
STEP: delete the pod
Sep 21 12:04:57.637: INFO: Waiting for pod pod-configmaps-f569d1e0-2009-436c-989f-3fc2dd776484 to disappear
Sep 21 12:04:57.659: INFO: Pod pod-configmaps-f569d1e0-2009-436c-989f-3fc2dd776484 no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 12:04:57.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-333" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":266,"skipped":4311,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 12:04:57.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 12:05:01.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4129" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":267,"skipped":4333,"failed":0}
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 12:05:01.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0921 12:05:02.832395 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Sep 21 12:06:04.860: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 12:06:04.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4633" for this suite.
• [SLOW TEST:62.977 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":303,"completed":268,"skipped":4333,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 12:06:04.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod busybox-6064b7f0-8bde-4dcf-b1c7-b3bd273d9723 in namespace container-probe-412
Sep 21 12:06:09.050: INFO: Started pod busybox-6064b7f0-8bde-4dcf-b1c7-b3bd273d9723 in namespace container-probe-412
STEP: checking the pod's current state and verifying that restartCount is present
Sep 21 12:06:09.057: INFO: Initial restart count of pod busybox-6064b7f0-8bde-4dcf-b1c7-b3bd273d9723 is 0
Sep 21 12:07:01.282: INFO: Restart count of pod container-probe-412/busybox-6064b7f0-8bde-4dcf-b1c7-b3bd273d9723 is now 1 (52.225040268s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 12:07:01.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-412" for this suite.
• [SLOW TEST:56.458 seconds]
[k8s.io] Probing container
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":269,"skipped":4340,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 12:07:01.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Sep 21 12:07:01.434: INFO: >>> kubeConfig: /root/.kube/config
Sep 21 12:07:12.135: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 12:08:24.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1205" for this suite.
• [SLOW TEST:83.177 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":303,"completed":270,"skipped":4349,"failed":0}
SS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 12:08:24.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-11d32c94-d93b-4585-8079-34299c9f8dc8
STEP: Creating a pod to test consume secrets
Sep 21 12:08:24.633: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-56f5b0ac-e14a-4020-badb-722c7d89e7fd" in namespace "projected-3869" to be "Succeeded or Failed"
Sep 21 12:08:24.648: INFO: Pod "pod-projected-secrets-56f5b0ac-e14a-4020-badb-722c7d89e7fd": Phase="Pending", Reason="", readiness=false. Elapsed: 15.002729ms
Sep 21 12:08:26.655: INFO: Pod "pod-projected-secrets-56f5b0ac-e14a-4020-badb-722c7d89e7fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022078356s
Sep 21 12:08:28.664: INFO: Pod "pod-projected-secrets-56f5b0ac-e14a-4020-badb-722c7d89e7fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031346598s
STEP: Saw pod success
Sep 21 12:08:28.665: INFO: Pod "pod-projected-secrets-56f5b0ac-e14a-4020-badb-722c7d89e7fd" satisfied condition "Succeeded or Failed"
Sep 21 12:08:28.670: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-56f5b0ac-e14a-4020-badb-722c7d89e7fd container projected-secret-volume-test:
STEP: delete the pod
Sep 21 12:08:28.719: INFO: Waiting for pod pod-projected-secrets-56f5b0ac-e14a-4020-badb-722c7d89e7fd to disappear
Sep 21 12:08:28.732: INFO: Pod pod-projected-secrets-56f5b0ac-e14a-4020-badb-722c7d89e7fd no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 12:08:28.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3869" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":271,"skipped":4351,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 12:08:28.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Sep 21 12:08:28.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Sep 21 12:09:51.223: INFO: >>> kubeConfig: /root/.kube/config
Sep 21 12:10:11.392: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 12:11:33.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1430" for this suite.
• [SLOW TEST:185.035 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":303,"completed":272,"skipped":4358,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 12:11:33.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 12:11:33.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-3913" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":303,"completed":273,"skipped":4378,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 12:11:33.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 12:11:39.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7083" for this suite.
• [SLOW TEST:5.230 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":303,"completed":274,"skipped":4392,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 12:11:39.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: validating cluster-info
Sep 21 12:11:39.232: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config cluster-info'
Sep 21 12:11:40.453: INFO: stderr: ""
Sep 21 12:11:40.453: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:46255\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:46255/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 12:11:40.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9200" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":303,"completed":275,"skipped":4394,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 12:11:40.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-069dabe7-464b-418c-9ba9-3c6c318bfa27
STEP: Creating a pod to test consume configMaps
Sep 21 12:11:40.599: INFO: Waiting up to 5m0s for pod "pod-configmaps-8fa5fd4f-94fc-4267-9763-e362ebab283f" in namespace "configmap-6616" to be "Succeeded or Failed"
Sep 21 12:11:40.632: INFO: Pod "pod-configmaps-8fa5fd4f-94fc-4267-9763-e362ebab283f": Phase="Pending", Reason="", readiness=false. Elapsed: 33.382415ms
Sep 21 12:11:42.640: INFO: Pod "pod-configmaps-8fa5fd4f-94fc-4267-9763-e362ebab283f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041600867s
Sep 21 12:11:44.649: INFO: Pod "pod-configmaps-8fa5fd4f-94fc-4267-9763-e362ebab283f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050421246s
Sep 21 12:11:46.658: INFO: Pod "pod-configmaps-8fa5fd4f-94fc-4267-9763-e362ebab283f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.059746285s
STEP: Saw pod success
Sep 21 12:11:46.659: INFO: Pod "pod-configmaps-8fa5fd4f-94fc-4267-9763-e362ebab283f" satisfied condition "Succeeded or Failed"
Sep 21 12:11:46.665: INFO: Trying to get logs from node kali-worker pod pod-configmaps-8fa5fd4f-94fc-4267-9763-e362ebab283f container configmap-volume-test:
STEP: delete the pod
Sep 21 12:11:46.718: INFO: Waiting for pod pod-configmaps-8fa5fd4f-94fc-4267-9763-e362ebab283f to disappear
Sep 21 12:11:46.755: INFO: Pod pod-configmaps-8fa5fd4f-94fc-4267-9763-e362ebab283f no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 12:11:46.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6616" for this suite.
• [SLOW TEST:6.302 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":276,"skipped":4403,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 12:11:46.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 12:12:18.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2879" for this suite.
STEP: Destroying namespace "nsdeletetest-6227" for this suite.
Sep 21 12:12:18.049: INFO: Namespace nsdeletetest-6227 was already deleted
STEP: Destroying namespace "nsdeletetest-5861" for this suite.
• [SLOW TEST:31.276 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":303,"completed":277,"skipped":4436,"failed":0}
[k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 21 12:12:18.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 21 12:14:18.316: INFO: Deleting pod "var-expansion-b5544c79-12bc-4d82-a059-12f16e5976ea" in namespace "var-expansion-6214"
Sep 21 12:14:18.322: INFO: Wait up to 5m0s for pod "var-expansion-b5544c79-12bc-4d82-a059-12f16e5976ea" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 21 12:14:20.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6214" for this suite.
• [SLOW TEST:122.605 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":303,"completed":278,"skipped":4436,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 12:14:20.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 21 12:14:31.078: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 21 12:14:33.098: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736287271, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736287271, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736287271, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736287271, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 21 12:14:36.142: INFO: Waiting for 
amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:14:36.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6053" for this suite. STEP: Destroying namespace "webhook-6053-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.746 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":303,"completed":279,"skipped":4452,"failed":0} S ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 12:14:36.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Sep 21 12:14:36.508: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:14:53.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7522" for this suite. 
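The "should be submitted and removed" flow above creates a pod, sets up a watch, then deletes the pod gracefully and verifies the deletion event is observed. A minimal manifest of the kind such a test submits might look like the following sketch; the pod name, label, and image are illustrative assumptions, not values taken from this log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove-example     # illustrative name, not from the log
  labels:
    test: submit-remove
spec:
  # Deletion is only observed by the watch after graceful shutdown completes.
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.2       # assumed placeholder image
```

Watching with `kubectl get pods -w` while creating and then deleting such a pod reproduces the ADDED and DELETED notifications the test verifies.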
• [SLOW TEST:16.828 seconds] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":303,"completed":280,"skipped":4453,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 12:14:53.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 21 12:14:53.331: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f7ac1a72-fa1d-4607-96bf-6c85e8e33819" in 
namespace "downward-api-3608" to be "Succeeded or Failed" Sep 21 12:14:53.366: INFO: Pod "downwardapi-volume-f7ac1a72-fa1d-4607-96bf-6c85e8e33819": Phase="Pending", Reason="", readiness=false. Elapsed: 34.616259ms Sep 21 12:14:55.489: INFO: Pod "downwardapi-volume-f7ac1a72-fa1d-4607-96bf-6c85e8e33819": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15767007s Sep 21 12:14:57.496: INFO: Pod "downwardapi-volume-f7ac1a72-fa1d-4607-96bf-6c85e8e33819": Phase="Running", Reason="", readiness=true. Elapsed: 4.165320048s Sep 21 12:14:59.505: INFO: Pod "downwardapi-volume-f7ac1a72-fa1d-4607-96bf-6c85e8e33819": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.173780762s STEP: Saw pod success Sep 21 12:14:59.505: INFO: Pod "downwardapi-volume-f7ac1a72-fa1d-4607-96bf-6c85e8e33819" satisfied condition "Succeeded or Failed" Sep 21 12:14:59.510: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-f7ac1a72-fa1d-4607-96bf-6c85e8e33819 container client-container: STEP: delete the pod Sep 21 12:14:59.547: INFO: Waiting for pod downwardapi-volume-f7ac1a72-fa1d-4607-96bf-6c85e8e33819 to disappear Sep 21 12:14:59.552: INFO: Pod downwardapi-volume-f7ac1a72-fa1d-4607-96bf-6c85e8e33819 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:14:59.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3608" for this suite. 
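The Downward API volume test above verifies that when a container sets no CPU limit, `limits.cpu` exposed through the volume falls back to the node's allocatable CPU. A minimal sketch of such a pod follows; the names and image are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.32               # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # Note: no resources.limits.cpu is set on this container.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu        # with no limit set, node allocatable CPU is reported
```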
• [SLOW TEST:6.331 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":281,"skipped":4460,"failed":0} SS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 12:14:59.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-d204f4d7-9979-4bd4-a5cd-3006c7415c85 in namespace container-probe-3000 Sep 
21 12:15:03.728: INFO: Started pod liveness-d204f4d7-9979-4bd4-a5cd-3006c7415c85 in namespace container-probe-3000 STEP: checking the pod's current state and verifying that restartCount is present Sep 21 12:15:03.731: INFO: Initial restart count of pod liveness-d204f4d7-9979-4bd4-a5cd-3006c7415c85 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:19:04.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3000" for this suite. • [SLOW TEST:245.311 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":303,"completed":282,"skipped":4462,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 12:19:04.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Sep 21 12:19:05.378: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6438 /api/v1/namespaces/watch-6438/configmaps/e2e-watch-test-label-changed 4482c4c0-ce49-4f1c-b212-2eaa215e4107 2082057 0 2020-09-21 12:19:04 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-21 12:19:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Sep 21 12:19:05.381: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6438 /api/v1/namespaces/watch-6438/configmaps/e2e-watch-test-label-changed 4482c4c0-ce49-4f1c-b212-2eaa215e4107 2082058 0 2020-09-21 12:19:04 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-21 12:19:05 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 21 12:19:05.382: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6438 /api/v1/namespaces/watch-6438/configmaps/e2e-watch-test-label-changed 4482c4c0-ce49-4f1c-b212-2eaa215e4107 2082059 0 2020-09-21 12:19:04 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] 
[{e2e.test Update v1 2020-09-21 12:19:05 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Sep 21 12:19:15.421: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6438 /api/v1/namespaces/watch-6438/configmaps/e2e-watch-test-label-changed 4482c4c0-ce49-4f1c-b212-2eaa215e4107 2082100 0 2020-09-21 12:19:04 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-21 12:19:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 21 12:19:15.423: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6438 /api/v1/namespaces/watch-6438/configmaps/e2e-watch-test-label-changed 4482c4c0-ce49-4f1c-b212-2eaa215e4107 2082101 0 2020-09-21 12:19:04 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-21 12:19:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 21 12:19:15.424: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6438 /api/v1/namespaces/watch-6438/configmaps/e2e-watch-test-label-changed 4482c4c0-ce49-4f1c-b212-2eaa215e4107 2082102 0 2020-09-21 12:19:04 +0000 
UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-21 12:19:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:19:15.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6438" for this suite. • [SLOW TEST:10.554 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":303,"completed":283,"skipped":4501,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 12:19:15.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Sep 21 12:19:15.557: INFO: Waiting up to 5m0s for pod "pod-37cf67ce-07d5-4ad6-96a4-3632a5e80f45" in namespace "emptydir-415" to be "Succeeded or Failed" Sep 21 12:19:15.581: INFO: Pod "pod-37cf67ce-07d5-4ad6-96a4-3632a5e80f45": Phase="Pending", Reason="", readiness=false. Elapsed: 23.080005ms Sep 21 12:19:17.589: INFO: Pod "pod-37cf67ce-07d5-4ad6-96a4-3632a5e80f45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031688436s Sep 21 12:19:19.598: INFO: Pod "pod-37cf67ce-07d5-4ad6-96a4-3632a5e80f45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040363378s STEP: Saw pod success Sep 21 12:19:19.598: INFO: Pod "pod-37cf67ce-07d5-4ad6-96a4-3632a5e80f45" satisfied condition "Succeeded or Failed" Sep 21 12:19:19.604: INFO: Trying to get logs from node kali-worker pod pod-37cf67ce-07d5-4ad6-96a4-3632a5e80f45 container test-container: STEP: delete the pod Sep 21 12:19:19.647: INFO: Waiting for pod pod-37cf67ce-07d5-4ad6-96a4-3632a5e80f45 to disappear Sep 21 12:19:19.651: INFO: Pod pod-37cf67ce-07d5-4ad6-96a4-3632a5e80f45 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:19:19.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-415" for this suite. 
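The EmptyDir test above ("root,0644,default") writes a file as root with mode 0644 into an emptyDir volume on the default medium and checks the resulting permissions. A minimal sketch, with an assumed image and hypothetical command standing in for the suite's mount-test container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-example         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.32               # assumed image
    # Write a file, force mode 0644, then print the observed permissions.
    command: ["sh", "-c", "echo data > /test-volume/f && chmod 0644 /test-volume/f && stat -c %a /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                      # "default" medium: backed by node storage, not tmpfs
```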
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":284,"skipped":4538,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 12:19:19.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 21 12:19:19.766: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 21 12:19:19.801: INFO: Waiting for terminating namespaces to be deleted... 
Sep 21 12:19:19.807: INFO: Logging pods the apiserver thinks is on node kali-worker before test Sep 21 12:19:19.816: INFO: kindnet-jk7qk from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 21 12:19:19.816: INFO: Container kindnet-cni ready: true, restart count 0 Sep 21 12:19:19.816: INFO: kube-proxy-kz8hk from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 21 12:19:19.816: INFO: Container kube-proxy ready: true, restart count 0 Sep 21 12:19:19.816: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test Sep 21 12:19:19.824: INFO: kindnet-r64bh from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 21 12:19:19.825: INFO: Container kindnet-cni ready: true, restart count 0 Sep 21 12:19:19.825: INFO: kube-proxy-rnv9w from kube-system started at 2020-09-13 16:57:34 +0000 UTC (1 container statuses recorded) Sep 21 12:19:19.825: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1636cbc3e9161b91], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.1636cbc3eae9cac6], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
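The scheduling test above submits a pod whose nonempty `nodeSelector` matches no node, then waits for the `FailedScheduling` events recorded in the log. A sketch of such an unschedulable pod, with illustrative names and an assumed image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod-example        # illustrative name
spec:
  nodeSelector:
    example-label: no-node-has-this   # hypothetical label carried by no node
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.2       # assumed placeholder image
```

Submitting a pod like this against a cluster where no node carries the label produces events of the form seen above: `0/3 nodes are available: 3 node(s) didn't match node selector.`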
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:19:20.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-35" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":303,"completed":285,"skipped":4550,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 12:19:20.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-fe323da9-e08b-43c7-b347-3059652c6a76 STEP: Creating a pod to test consume secrets Sep 21 12:19:20.960: INFO: Waiting up to 
5m0s for pod "pod-projected-secrets-7ce342a0-e3a3-46f2-a322-f773aef691cd" in namespace "projected-2921" to be "Succeeded or Failed" Sep 21 12:19:20.984: INFO: Pod "pod-projected-secrets-7ce342a0-e3a3-46f2-a322-f773aef691cd": Phase="Pending", Reason="", readiness=false. Elapsed: 23.911811ms Sep 21 12:19:22.995: INFO: Pod "pod-projected-secrets-7ce342a0-e3a3-46f2-a322-f773aef691cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034701651s Sep 21 12:19:25.001: INFO: Pod "pod-projected-secrets-7ce342a0-e3a3-46f2-a322-f773aef691cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041309391s STEP: Saw pod success Sep 21 12:19:25.001: INFO: Pod "pod-projected-secrets-7ce342a0-e3a3-46f2-a322-f773aef691cd" satisfied condition "Succeeded or Failed" Sep 21 12:19:25.005: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-7ce342a0-e3a3-46f2-a322-f773aef691cd container projected-secret-volume-test: STEP: delete the pod Sep 21 12:19:25.019: INFO: Waiting for pod pod-projected-secrets-7ce342a0-e3a3-46f2-a322-f773aef691cd to disappear Sep 21 12:19:25.024: INFO: Pod pod-projected-secrets-7ce342a0-e3a3-46f2-a322-f773aef691cd no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:19:25.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2921" for this suite. 
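The projected-secret test above mounts a Secret through a `projected` volume as a non-root user, with both `defaultMode` and `fsGroup` set, and checks the resulting file ownership and permissions. A minimal sketch under assumed names, UID, and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-example      # illustrative name
spec:
  securityContext:
    runAsUser: 1000                   # non-root
    fsGroup: 1000                     # group ownership applied to the volume
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.32               # assumed image
    command: ["sh", "-c", "ls -ln /etc/projected"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected
  volumes:
  - name: secret-vol
    projected:
      defaultMode: 0440               # mode applied to projected files
      sources:
      - secret:
          name: projected-secret-example   # hypothetical Secret name
```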
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":286,"skipped":4553,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 12:19:25.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-8d378af8-3a53-4d55-9076-d4ec27bcc4e2 Sep 21 12:19:25.173: INFO: Pod name my-hostname-basic-8d378af8-3a53-4d55-9076-d4ec27bcc4e2: Found 0 pods out of 1 Sep 21 12:19:30.182: INFO: Pod name my-hostname-basic-8d378af8-3a53-4d55-9076-d4ec27bcc4e2: Found 1 pods out of 1 Sep 21 12:19:30.183: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-8d378af8-3a53-4d55-9076-d4ec27bcc4e2" are running Sep 21 12:19:30.188: INFO: Pod "my-hostname-basic-8d378af8-3a53-4d55-9076-d4ec27bcc4e2-mx4sp" is running (conditions: [{Type:Initialized 
Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-21 12:19:25 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-21 12:19:28 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-21 12:19:28 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-21 12:19:25 +0000 UTC Reason: Message:}]) Sep 21 12:19:30.190: INFO: Trying to dial the pod Sep 21 12:19:35.209: INFO: Controller my-hostname-basic-8d378af8-3a53-4d55-9076-d4ec27bcc4e2: Got expected result from replica 1 [my-hostname-basic-8d378af8-3a53-4d55-9076-d4ec27bcc4e2-mx4sp]: "my-hostname-basic-8d378af8-3a53-4d55-9076-d4ec27bcc4e2-mx4sp", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:19:35.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-868" for this suite. 
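The ReplicationController test above creates an RC with one replica running an image that serves its own hostname, then dials each replica and checks the response matches the pod name. A sketch of such a controller; the name, image, and port are illustrative assumptions:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic-example     # illustrative name
spec:
  replicas: 1
  selector:
    name: my-hostname-basic-example
  template:
    metadata:
      labels:
        name: my-hostname-basic-example
    spec:
      containers:
      - name: serve-hostname
        image: k8s.gcr.io/e2e-test-images/agnhost:2.21   # assumed image that can serve its hostname
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376
```

Dialing each replica and comparing the returned string to the pod name (as the log shows for `...-mx4sp`) confirms every replica serves independently.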
• [SLOW TEST:10.189 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":287,"skipped":4574,"failed":0} SSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 12:19:35.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-5625/configmap-test-5a274ad4-e222-45f7-b061-35f8408060bf STEP: Creating a pod to test consume configMaps Sep 21 12:19:35.343: INFO: Waiting up to 5m0s for pod "pod-configmaps-6d6fa988-1b5a-4c47-8c7b-acb2b5a77c7b" in namespace "configmap-5625" to be "Succeeded or Failed" Sep 21 12:19:35.363: INFO: Pod "pod-configmaps-6d6fa988-1b5a-4c47-8c7b-acb2b5a77c7b": Phase="Pending", Reason="", 
readiness=false. Elapsed: 19.454828ms Sep 21 12:19:37.370: INFO: Pod "pod-configmaps-6d6fa988-1b5a-4c47-8c7b-acb2b5a77c7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026855535s Sep 21 12:19:39.378: INFO: Pod "pod-configmaps-6d6fa988-1b5a-4c47-8c7b-acb2b5a77c7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0348769s STEP: Saw pod success Sep 21 12:19:39.379: INFO: Pod "pod-configmaps-6d6fa988-1b5a-4c47-8c7b-acb2b5a77c7b" satisfied condition "Succeeded or Failed" Sep 21 12:19:39.384: INFO: Trying to get logs from node kali-worker pod pod-configmaps-6d6fa988-1b5a-4c47-8c7b-acb2b5a77c7b container env-test: STEP: delete the pod Sep 21 12:19:39.430: INFO: Waiting for pod pod-configmaps-6d6fa988-1b5a-4c47-8c7b-acb2b5a77c7b to disappear Sep 21 12:19:39.442: INFO: Pod pod-configmaps-6d6fa988-1b5a-4c47-8c7b-acb2b5a77c7b no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:19:39.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5625" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":303,"completed":288,"skipped":4578,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 12:19:39.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should provide secure master service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:19:39.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-802" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":303,"completed":289,"skipped":4587,"failed":0} S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 12:19:39.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Sep 21 12:19:47.774: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 21 12:19:47.827: INFO: Pod pod-with-prestop-exec-hook still exists Sep 21 12:19:49.828: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 21 12:19:49.837: INFO: Pod pod-with-prestop-exec-hook still exists Sep 21 12:19:51.828: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 21 12:19:51.837: INFO: Pod pod-with-prestop-exec-hook still exists Sep 21 12:19:53.828: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 21 12:19:53.835: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:19:53.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8442" for this suite. 
• [SLOW TEST:14.288 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":303,"completed":290,"skipped":4588,"failed":0} SSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 12:19:53.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying 
/etc/hosts of container is kubelet-managed for pod with hostNetwork=false Sep 21 12:20:04.051: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7301 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 21 12:20:04.052: INFO: >>> kubeConfig: /root/.kube/config I0921 12:20:04.162027 10 log.go:181] (0xad16230) (0xad16310) Create stream I0921 12:20:04.162225 10 log.go:181] (0xad16230) (0xad16310) Stream added, broadcasting: 1 I0921 12:20:04.166718 10 log.go:181] (0xad16230) Reply frame received for 1 I0921 12:20:04.166974 10 log.go:181] (0xad16230) (0x6e8d3b0) Create stream I0921 12:20:04.167105 10 log.go:181] (0xad16230) (0x6e8d3b0) Stream added, broadcasting: 3 I0921 12:20:04.168942 10 log.go:181] (0xad16230) Reply frame received for 3 I0921 12:20:04.169082 10 log.go:181] (0xad16230) (0xad16690) Create stream I0921 12:20:04.169163 10 log.go:181] (0xad16230) (0xad16690) Stream added, broadcasting: 5 I0921 12:20:04.170639 10 log.go:181] (0xad16230) Reply frame received for 5 I0921 12:20:04.246552 10 log.go:181] (0xad16230) Data frame received for 3 I0921 12:20:04.246882 10 log.go:181] (0x6e8d3b0) (3) Data frame handling I0921 12:20:04.247152 10 log.go:181] (0xad16230) Data frame received for 5 I0921 12:20:04.247544 10 log.go:181] (0xad16690) (5) Data frame handling I0921 12:20:04.247782 10 log.go:181] (0x6e8d3b0) (3) Data frame sent I0921 12:20:04.247914 10 log.go:181] (0xad16230) Data frame received for 3 I0921 12:20:04.248007 10 log.go:181] (0x6e8d3b0) (3) Data frame handling I0921 12:20:04.249084 10 log.go:181] (0xad16230) Data frame received for 1 I0921 12:20:04.249205 10 log.go:181] (0xad16310) (1) Data frame handling I0921 12:20:04.249325 10 log.go:181] (0xad16310) (1) Data frame sent I0921 12:20:04.249432 10 log.go:181] (0xad16230) (0xad16310) Stream removed, broadcasting: 1 I0921 12:20:04.249689 10 log.go:181] (0xad16230) Go away received I0921 12:20:04.250039 10 
log.go:181] (0xad16230) (0xad16310) Stream removed, broadcasting: 1 I0921 12:20:04.250299 10 log.go:181] (0xad16230) (0x6e8d3b0) Stream removed, broadcasting: 3 I0921 12:20:04.250550 10 log.go:181] (0xad16230) (0xad16690) Stream removed, broadcasting: 5 Sep 21 12:20:04.250: INFO: Exec stderr: "" Sep 21 12:20:04.251: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7301 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 21 12:20:04.251: INFO: >>> kubeConfig: /root/.kube/config I0921 12:20:04.351031 10 log.go:181] (0xad17420) (0xad175e0) Create stream I0921 12:20:04.351210 10 log.go:181] (0xad17420) (0xad175e0) Stream added, broadcasting: 1 I0921 12:20:04.355287 10 log.go:181] (0xad17420) Reply frame received for 1 I0921 12:20:04.355557 10 log.go:181] (0xad17420) (0xc39d500) Create stream I0921 12:20:04.355742 10 log.go:181] (0xad17420) (0xc39d500) Stream added, broadcasting: 3 I0921 12:20:04.357676 10 log.go:181] (0xad17420) Reply frame received for 3 I0921 12:20:04.357830 10 log.go:181] (0xad17420) (0xc39d6c0) Create stream I0921 12:20:04.357919 10 log.go:181] (0xad17420) (0xc39d6c0) Stream added, broadcasting: 5 I0921 12:20:04.359573 10 log.go:181] (0xad17420) Reply frame received for 5 I0921 12:20:04.427759 10 log.go:181] (0xad17420) Data frame received for 3 I0921 12:20:04.427953 10 log.go:181] (0xc39d500) (3) Data frame handling I0921 12:20:04.428084 10 log.go:181] (0xad17420) Data frame received for 5 I0921 12:20:04.428400 10 log.go:181] (0xc39d6c0) (5) Data frame handling I0921 12:20:04.428556 10 log.go:181] (0xc39d500) (3) Data frame sent I0921 12:20:04.428712 10 log.go:181] (0xad17420) Data frame received for 3 I0921 12:20:04.428849 10 log.go:181] (0xc39d500) (3) Data frame handling I0921 12:20:04.428953 10 log.go:181] (0xad17420) Data frame received for 1 I0921 12:20:04.429073 10 log.go:181] (0xad175e0) (1) Data frame handling I0921 12:20:04.429188 10 
log.go:181] (0xad175e0) (1) Data frame sent I0921 12:20:04.429339 10 log.go:181] (0xad17420) (0xad175e0) Stream removed, broadcasting: 1 I0921 12:20:04.429513 10 log.go:181] (0xad17420) Go away received I0921 12:20:04.429831 10 log.go:181] (0xad17420) (0xad175e0) Stream removed, broadcasting: 1 I0921 12:20:04.429953 10 log.go:181] (0xad17420) (0xc39d500) Stream removed, broadcasting: 3 I0921 12:20:04.430054 10 log.go:181] (0xad17420) (0xc39d6c0) Stream removed, broadcasting: 5 Sep 21 12:20:04.430: INFO: Exec stderr: "" Sep 21 12:20:04.430: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7301 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 21 12:20:04.430: INFO: >>> kubeConfig: /root/.kube/config I0921 12:20:04.534393 10 log.go:181] (0xac975e0) (0x723d500) Create stream I0921 12:20:04.534523 10 log.go:181] (0xac975e0) (0x723d500) Stream added, broadcasting: 1 I0921 12:20:04.538709 10 log.go:181] (0xac975e0) Reply frame received for 1 I0921 12:20:04.538980 10 log.go:181] (0xac975e0) (0xc39dab0) Create stream I0921 12:20:04.539098 10 log.go:181] (0xac975e0) (0xc39dab0) Stream added, broadcasting: 3 I0921 12:20:04.541055 10 log.go:181] (0xac975e0) Reply frame received for 3 I0921 12:20:04.541307 10 log.go:181] (0xac975e0) (0xc39dc70) Create stream I0921 12:20:04.541443 10 log.go:181] (0xac975e0) (0xc39dc70) Stream added, broadcasting: 5 I0921 12:20:04.543487 10 log.go:181] (0xac975e0) Reply frame received for 5 I0921 12:20:04.610756 10 log.go:181] (0xac975e0) Data frame received for 5 I0921 12:20:04.611014 10 log.go:181] (0xc39dc70) (5) Data frame handling I0921 12:20:04.611168 10 log.go:181] (0xac975e0) Data frame received for 3 I0921 12:20:04.611347 10 log.go:181] (0xc39dab0) (3) Data frame handling I0921 12:20:04.611541 10 log.go:181] (0xc39dab0) (3) Data frame sent I0921 12:20:04.611795 10 log.go:181] (0xac975e0) Data frame received for 3 I0921 12:20:04.611963 10 
log.go:181] (0xc39dab0) (3) Data frame handling I0921 12:20:04.612362 10 log.go:181] (0xac975e0) Data frame received for 1 I0921 12:20:04.612549 10 log.go:181] (0x723d500) (1) Data frame handling I0921 12:20:04.612773 10 log.go:181] (0x723d500) (1) Data frame sent I0921 12:20:04.612970 10 log.go:181] (0xac975e0) (0x723d500) Stream removed, broadcasting: 1 I0921 12:20:04.613166 10 log.go:181] (0xac975e0) Go away received I0921 12:20:04.613641 10 log.go:181] (0xac975e0) (0x723d500) Stream removed, broadcasting: 1 I0921 12:20:04.613814 10 log.go:181] (0xac975e0) (0xc39dab0) Stream removed, broadcasting: 3 I0921 12:20:04.613963 10 log.go:181] (0xac975e0) (0xc39dc70) Stream removed, broadcasting: 5 Sep 21 12:20:04.614: INFO: Exec stderr: "" Sep 21 12:20:04.614: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7301 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 21 12:20:04.614: INFO: >>> kubeConfig: /root/.kube/config I0921 12:20:04.719051 10 log.go:181] (0x6e8ddc0) (0x6e8de30) Create stream I0921 12:20:04.719195 10 log.go:181] (0x6e8ddc0) (0x6e8de30) Stream added, broadcasting: 1 I0921 12:20:04.725240 10 log.go:181] (0x6e8ddc0) Reply frame received for 1 I0921 12:20:04.725542 10 log.go:181] (0x6e8ddc0) (0x8576b60) Create stream I0921 12:20:04.725667 10 log.go:181] (0x6e8ddc0) (0x8576b60) Stream added, broadcasting: 3 I0921 12:20:04.727464 10 log.go:181] (0x6e8ddc0) Reply frame received for 3 I0921 12:20:04.727577 10 log.go:181] (0x6e8ddc0) (0x8577110) Create stream I0921 12:20:04.727634 10 log.go:181] (0x6e8ddc0) (0x8577110) Stream added, broadcasting: 5 I0921 12:20:04.729020 10 log.go:181] (0x6e8ddc0) Reply frame received for 5 I0921 12:20:04.785855 10 log.go:181] (0x6e8ddc0) Data frame received for 3 I0921 12:20:04.786034 10 log.go:181] (0x8576b60) (3) Data frame handling I0921 12:20:04.786164 10 log.go:181] (0x6e8ddc0) Data frame received for 5 I0921 
12:20:04.786340 10 log.go:181] (0x8577110) (5) Data frame handling I0921 12:20:04.786474 10 log.go:181] (0x8576b60) (3) Data frame sent I0921 12:20:04.786631 10 log.go:181] (0x6e8ddc0) Data frame received for 3 I0921 12:20:04.786762 10 log.go:181] (0x8576b60) (3) Data frame handling I0921 12:20:04.787849 10 log.go:181] (0x6e8ddc0) Data frame received for 1 I0921 12:20:04.787975 10 log.go:181] (0x6e8de30) (1) Data frame handling I0921 12:20:04.788107 10 log.go:181] (0x6e8de30) (1) Data frame sent I0921 12:20:04.788392 10 log.go:181] (0x6e8ddc0) (0x6e8de30) Stream removed, broadcasting: 1 I0921 12:20:04.788595 10 log.go:181] (0x6e8ddc0) Go away received I0921 12:20:04.789080 10 log.go:181] (0x6e8ddc0) (0x6e8de30) Stream removed, broadcasting: 1 I0921 12:20:04.789258 10 log.go:181] (0x6e8ddc0) (0x8576b60) Stream removed, broadcasting: 3 I0921 12:20:04.789399 10 log.go:181] (0x6e8ddc0) (0x8577110) Stream removed, broadcasting: 5 Sep 21 12:20:04.789: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Sep 21 12:20:04.789: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7301 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 21 12:20:04.790: INFO: >>> kubeConfig: /root/.kube/config I0921 12:20:04.904816 10 log.go:181] (0xa662e70) (0xa662fc0) Create stream I0921 12:20:04.905043 10 log.go:181] (0xa662e70) (0xa662fc0) Stream added, broadcasting: 1 I0921 12:20:04.910791 10 log.go:181] (0xa662e70) Reply frame received for 1 I0921 12:20:04.911029 10 log.go:181] (0xa662e70) (0x7da45b0) Create stream I0921 12:20:04.911123 10 log.go:181] (0xa662e70) (0x7da45b0) Stream added, broadcasting: 3 I0921 12:20:04.913216 10 log.go:181] (0xa662e70) Reply frame received for 3 I0921 12:20:04.913396 10 log.go:181] (0xa662e70) (0xa663420) Create stream I0921 12:20:04.913481 10 log.go:181] (0xa662e70) (0xa663420) Stream 
added, broadcasting: 5 I0921 12:20:04.914920 10 log.go:181] (0xa662e70) Reply frame received for 5 I0921 12:20:04.990013 10 log.go:181] (0xa662e70) Data frame received for 3 I0921 12:20:04.990244 10 log.go:181] (0x7da45b0) (3) Data frame handling I0921 12:20:04.990562 10 log.go:181] (0xa662e70) Data frame received for 5 I0921 12:20:04.990740 10 log.go:181] (0xa663420) (5) Data frame handling I0921 12:20:04.990882 10 log.go:181] (0xa662e70) Data frame received for 1 I0921 12:20:04.991052 10 log.go:181] (0xa662fc0) (1) Data frame handling I0921 12:20:04.991243 10 log.go:181] (0x7da45b0) (3) Data frame sent I0921 12:20:04.991564 10 log.go:181] (0xa662e70) Data frame received for 3 I0921 12:20:04.991727 10 log.go:181] (0x7da45b0) (3) Data frame handling I0921 12:20:04.991863 10 log.go:181] (0xa662fc0) (1) Data frame sent I0921 12:20:04.992001 10 log.go:181] (0xa662e70) (0xa662fc0) Stream removed, broadcasting: 1 I0921 12:20:04.992270 10 log.go:181] (0xa662e70) Go away received I0921 12:20:04.992732 10 log.go:181] (0xa662e70) (0xa662fc0) Stream removed, broadcasting: 1 I0921 12:20:04.992850 10 log.go:181] (0xa662e70) (0x7da45b0) Stream removed, broadcasting: 3 I0921 12:20:04.992964 10 log.go:181] (0xa662e70) (0xa663420) Stream removed, broadcasting: 5 Sep 21 12:20:04.993: INFO: Exec stderr: "" Sep 21 12:20:04.993: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7301 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 21 12:20:04.993: INFO: >>> kubeConfig: /root/.kube/config I0921 12:20:05.114912 10 log.go:181] (0x79bd730) (0x79bd7a0) Create stream I0921 12:20:05.115107 10 log.go:181] (0x79bd730) (0x79bd7a0) Stream added, broadcasting: 1 I0921 12:20:05.120771 10 log.go:181] (0x79bd730) Reply frame received for 1 I0921 12:20:05.121067 10 log.go:181] (0x79bd730) (0x79bd960) Create stream I0921 12:20:05.121195 10 log.go:181] (0x79bd730) (0x79bd960) Stream added, 
broadcasting: 3 I0921 12:20:05.123229 10 log.go:181] (0x79bd730) Reply frame received for 3 I0921 12:20:05.123485 10 log.go:181] (0x79bd730) (0xa663d50) Create stream I0921 12:20:05.123612 10 log.go:181] (0x79bd730) (0xa663d50) Stream added, broadcasting: 5 I0921 12:20:05.125757 10 log.go:181] (0x79bd730) Reply frame received for 5 I0921 12:20:05.214060 10 log.go:181] (0x79bd730) Data frame received for 3 I0921 12:20:05.214293 10 log.go:181] (0x79bd960) (3) Data frame handling I0921 12:20:05.214474 10 log.go:181] (0x79bd960) (3) Data frame sent I0921 12:20:05.214700 10 log.go:181] (0x79bd730) Data frame received for 3 I0921 12:20:05.214836 10 log.go:181] (0x79bd730) Data frame received for 5 I0921 12:20:05.215041 10 log.go:181] (0xa663d50) (5) Data frame handling I0921 12:20:05.215230 10 log.go:181] (0x79bd960) (3) Data frame handling I0921 12:20:05.215496 10 log.go:181] (0x79bd730) Data frame received for 1 I0921 12:20:05.215621 10 log.go:181] (0x79bd7a0) (1) Data frame handling I0921 12:20:05.215749 10 log.go:181] (0x79bd7a0) (1) Data frame sent I0921 12:20:05.215882 10 log.go:181] (0x79bd730) (0x79bd7a0) Stream removed, broadcasting: 1 I0921 12:20:05.216055 10 log.go:181] (0x79bd730) Go away received I0921 12:20:05.216522 10 log.go:181] (0x79bd730) (0x79bd7a0) Stream removed, broadcasting: 1 I0921 12:20:05.216739 10 log.go:181] (0x79bd730) (0x79bd960) Stream removed, broadcasting: 3 I0921 12:20:05.216919 10 log.go:181] (0x79bd730) (0xa663d50) Stream removed, broadcasting: 5 Sep 21 12:20:05.217: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Sep 21 12:20:05.217: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7301 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 21 12:20:05.217: INFO: >>> kubeConfig: /root/.kube/config I0921 12:20:05.329531 10 log.go:181] (0x7da5030) (0x7da50a0) 
Create stream I0921 12:20:05.329719 10 log.go:181] (0x7da5030) (0x7da50a0) Stream added, broadcasting: 1 I0921 12:20:05.334510 10 log.go:181] (0x7da5030) Reply frame received for 1 I0921 12:20:05.334757 10 log.go:181] (0x7da5030) (0x84e09a0) Create stream I0921 12:20:05.334903 10 log.go:181] (0x7da5030) (0x84e09a0) Stream added, broadcasting: 3 I0921 12:20:05.336836 10 log.go:181] (0x7da5030) Reply frame received for 3 I0921 12:20:05.337016 10 log.go:181] (0x7da5030) (0x7da5260) Create stream I0921 12:20:05.337105 10 log.go:181] (0x7da5030) (0x7da5260) Stream added, broadcasting: 5 I0921 12:20:05.338643 10 log.go:181] (0x7da5030) Reply frame received for 5 I0921 12:20:05.398456 10 log.go:181] (0x7da5030) Data frame received for 5 I0921 12:20:05.398666 10 log.go:181] (0x7da5260) (5) Data frame handling I0921 12:20:05.398818 10 log.go:181] (0x7da5030) Data frame received for 3 I0921 12:20:05.398949 10 log.go:181] (0x84e09a0) (3) Data frame handling I0921 12:20:05.399078 10 log.go:181] (0x84e09a0) (3) Data frame sent I0921 12:20:05.399259 10 log.go:181] (0x7da5030) Data frame received for 3 I0921 12:20:05.399422 10 log.go:181] (0x84e09a0) (3) Data frame handling I0921 12:20:05.399773 10 log.go:181] (0x7da5030) Data frame received for 1 I0921 12:20:05.399891 10 log.go:181] (0x7da50a0) (1) Data frame handling I0921 12:20:05.400009 10 log.go:181] (0x7da50a0) (1) Data frame sent I0921 12:20:05.400123 10 log.go:181] (0x7da5030) (0x7da50a0) Stream removed, broadcasting: 1 I0921 12:20:05.400352 10 log.go:181] (0x7da5030) Go away received I0921 12:20:05.400917 10 log.go:181] (0x7da5030) (0x7da50a0) Stream removed, broadcasting: 1 I0921 12:20:05.401068 10 log.go:181] (0x7da5030) (0x84e09a0) Stream removed, broadcasting: 3 I0921 12:20:05.401199 10 log.go:181] (0x7da5030) (0x7da5260) Stream removed, broadcasting: 5 Sep 21 12:20:05.401: INFO: Exec stderr: "" Sep 21 12:20:05.401: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7301 
PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 21 12:20:05.401: INFO: >>> kubeConfig: /root/.kube/config I0921 12:20:05.511757 10 log.go:181] (0x801c460) (0x801c4d0) Create stream I0921 12:20:05.511881 10 log.go:181] (0x801c460) (0x801c4d0) Stream added, broadcasting: 1 I0921 12:20:05.516553 10 log.go:181] (0x801c460) Reply frame received for 1 I0921 12:20:05.516834 10 log.go:181] (0x801c460) (0x7da5420) Create stream I0921 12:20:05.516984 10 log.go:181] (0x801c460) (0x7da5420) Stream added, broadcasting: 3 I0921 12:20:05.519027 10 log.go:181] (0x801c460) Reply frame received for 3 I0921 12:20:05.519256 10 log.go:181] (0x801c460) (0x90e5570) Create stream I0921 12:20:05.519368 10 log.go:181] (0x801c460) (0x90e5570) Stream added, broadcasting: 5 I0921 12:20:05.521372 10 log.go:181] (0x801c460) Reply frame received for 5 I0921 12:20:05.580872 10 log.go:181] (0x801c460) Data frame received for 5 I0921 12:20:05.581043 10 log.go:181] (0x90e5570) (5) Data frame handling I0921 12:20:05.581214 10 log.go:181] (0x801c460) Data frame received for 3 I0921 12:20:05.581352 10 log.go:181] (0x7da5420) (3) Data frame handling I0921 12:20:05.581455 10 log.go:181] (0x7da5420) (3) Data frame sent I0921 12:20:05.581529 10 log.go:181] (0x801c460) Data frame received for 3 I0921 12:20:05.581592 10 log.go:181] (0x7da5420) (3) Data frame handling I0921 12:20:05.582028 10 log.go:181] (0x801c460) Data frame received for 1 I0921 12:20:05.582208 10 log.go:181] (0x801c4d0) (1) Data frame handling I0921 12:20:05.582380 10 log.go:181] (0x801c4d0) (1) Data frame sent I0921 12:20:05.582507 10 log.go:181] (0x801c460) (0x801c4d0) Stream removed, broadcasting: 1 I0921 12:20:05.582660 10 log.go:181] (0x801c460) Go away received I0921 12:20:05.583105 10 log.go:181] (0x801c460) (0x801c4d0) Stream removed, broadcasting: 1 I0921 12:20:05.583226 10 log.go:181] (0x801c460) (0x7da5420) Stream removed, broadcasting: 3 I0921 
12:20:05.583394 10 log.go:181] (0x801c460) (0x90e5570) Stream removed, broadcasting: 5 Sep 21 12:20:05.583: INFO: Exec stderr: "" Sep 21 12:20:05.583: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7301 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 21 12:20:05.583: INFO: >>> kubeConfig: /root/.kube/config I0921 12:20:05.692205 10 log.go:181] (0x801c930) (0x801c9a0) Create stream I0921 12:20:05.692342 10 log.go:181] (0x801c930) (0x801c9a0) Stream added, broadcasting: 1 I0921 12:20:05.696102 10 log.go:181] (0x801c930) Reply frame received for 1 I0921 12:20:05.696319 10 log.go:181] (0x801c930) (0x7da56c0) Create stream I0921 12:20:05.696395 10 log.go:181] (0x801c930) (0x7da56c0) Stream added, broadcasting: 3 I0921 12:20:05.697601 10 log.go:181] (0x801c930) Reply frame received for 3 I0921 12:20:05.697723 10 log.go:181] (0x801c930) (0x7da5880) Create stream I0921 12:20:05.697779 10 log.go:181] (0x801c930) (0x7da5880) Stream added, broadcasting: 5 I0921 12:20:05.699311 10 log.go:181] (0x801c930) Reply frame received for 5 I0921 12:20:05.763668 10 log.go:181] (0x801c930) Data frame received for 5 I0921 12:20:05.763911 10 log.go:181] (0x7da5880) (5) Data frame handling I0921 12:20:05.764203 10 log.go:181] (0x801c930) Data frame received for 3 I0921 12:20:05.764302 10 log.go:181] (0x7da56c0) (3) Data frame handling I0921 12:20:05.764398 10 log.go:181] (0x7da56c0) (3) Data frame sent I0921 12:20:05.764462 10 log.go:181] (0x801c930) Data frame received for 3 I0921 12:20:05.764551 10 log.go:181] (0x7da56c0) (3) Data frame handling I0921 12:20:05.764977 10 log.go:181] (0x801c930) Data frame received for 1 I0921 12:20:05.765099 10 log.go:181] (0x801c9a0) (1) Data frame handling I0921 12:20:05.765222 10 log.go:181] (0x801c9a0) (1) Data frame sent I0921 12:20:05.765392 10 log.go:181] (0x801c930) (0x801c9a0) Stream removed, broadcasting: 1 I0921 12:20:05.765579 10 
log.go:181] (0x801c930) Go away received I0921 12:20:05.765908 10 log.go:181] (0x801c930) (0x801c9a0) Stream removed, broadcasting: 1 I0921 12:20:05.766053 10 log.go:181] (0x801c930) (0x7da56c0) Stream removed, broadcasting: 3 I0921 12:20:05.766149 10 log.go:181] (0x801c930) (0x7da5880) Stream removed, broadcasting: 5 Sep 21 12:20:05.766: INFO: Exec stderr: "" Sep 21 12:20:05.766: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7301 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 21 12:20:05.766: INFO: >>> kubeConfig: /root/.kube/config I0921 12:20:05.864855 10 log.go:181] (0xa00bd50) (0xa00be30) Create stream I0921 12:20:05.864992 10 log.go:181] (0xa00bd50) (0xa00be30) Stream added, broadcasting: 1 I0921 12:20:05.868828 10 log.go:181] (0xa00bd50) Reply frame received for 1 I0921 12:20:05.869119 10 log.go:181] (0xa00bd50) (0x801cc40) Create stream I0921 12:20:05.869249 10 log.go:181] (0xa00bd50) (0x801cc40) Stream added, broadcasting: 3 I0921 12:20:05.870924 10 log.go:181] (0xa00bd50) Reply frame received for 3 I0921 12:20:05.871074 10 log.go:181] (0xa00bd50) (0x7da3260) Create stream I0921 12:20:05.871162 10 log.go:181] (0xa00bd50) (0x7da3260) Stream added, broadcasting: 5 I0921 12:20:05.872722 10 log.go:181] (0xa00bd50) Reply frame received for 5 I0921 12:20:05.950743 10 log.go:181] (0xa00bd50) Data frame received for 3 I0921 12:20:05.951001 10 log.go:181] (0x801cc40) (3) Data frame handling I0921 12:20:05.951137 10 log.go:181] (0xa00bd50) Data frame received for 5 I0921 12:20:05.951291 10 log.go:181] (0x7da3260) (5) Data frame handling I0921 12:20:05.951461 10 log.go:181] (0x801cc40) (3) Data frame sent I0921 12:20:05.951624 10 log.go:181] (0xa00bd50) Data frame received for 3 I0921 12:20:05.951722 10 log.go:181] (0x801cc40) (3) Data frame handling I0921 12:20:05.952041 10 log.go:181] (0xa00bd50) Data frame received for 1 I0921 
12:20:05.952380 10 log.go:181] (0xa00be30) (1) Data frame handling I0921 12:20:05.952579 10 log.go:181] (0xa00be30) (1) Data frame sent I0921 12:20:05.952742 10 log.go:181] (0xa00bd50) (0xa00be30) Stream removed, broadcasting: 1 I0921 12:20:05.952947 10 log.go:181] (0xa00bd50) Go away received I0921 12:20:05.953272 10 log.go:181] (0xa00bd50) (0xa00be30) Stream removed, broadcasting: 1 I0921 12:20:05.953415 10 log.go:181] (0xa00bd50) (0x801cc40) Stream removed, broadcasting: 3 I0921 12:20:05.953598 10 log.go:181] (0xa00bd50) (0x7da3260) Stream removed, broadcasting: 5 Sep 21 12:20:05.953: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:20:05.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-7301" for this suite. • [SLOW TEST:12.095 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":291,"skipped":4591,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] 
SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 12:20:05.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Sep 21 12:20:06.112: INFO: Waiting up to 1m0s for all nodes to be ready Sep 21 12:21:06.186: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Sep 21 12:21:06.228: INFO: Created pod: pod0-sched-preemption-low-priority Sep 21 12:21:06.284: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:21:18.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-1929" for this suite. 
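The preemption steps logged above (low- and medium-priority pods sized to fill 2/3 of node resources, then a high-priority pod with the same requirements displacing the low-priority one) can be approximated with manifests along these lines. This is an illustrative sketch, not the e2e framework's actual objects: the PriorityClass names, values, and the CPU figure are assumptions.

```yaml
# Hypothetical priority classes; the e2e test creates its own equivalents.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: low-priority
value: 1000
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 100000
---
# A pod that can preempt lower-priority pods once the node is full.
apiVersion: v1
kind: Pod
metadata:
  name: preemptor            # illustrative name
spec:
  priorityClassName: high-priority
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: "500m"          # sized so it cannot fit without evicting a lower-priority pod (assumed value)
```

With both low- and high-priority pods pending on a full node, the scheduler evicts the lowest-priority victim to make room, which is exactly what the "Run a high priority pod that has same requirements as that of lower priority pod" step verifies.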
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:72.543 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":303,"completed":292,"skipped":4637,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 12:21:18.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Sep 21 12:21:18.646: INFO: PodSpec: initContainers in spec.initContainers Sep 21 12:22:09.313: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-0d1c3313-50eb-4f21-a82e-0ff2e3ea85f2", GenerateName:"", Namespace:"init-container-186", SelfLink:"/api/v1/namespaces/init-container-186/pods/pod-init-0d1c3313-50eb-4f21-a82e-0ff2e3ea85f2", UID:"7206de1b-38c0-4f74-8527-c4e671a6a90a", ResourceVersion:"2082958", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63736287678, loc:(*time.Location)(0x5d1d160)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"645237944"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0x91a8500), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x9f1a0f0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0x91a85c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x9f1a100)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-r285p", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x91a86c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), 
RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-r285p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-r285p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-r285p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", 
TerminationGracePeriodSeconds:(*int64)(0xd3b6088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x7d8bc80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xd3b6110)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xd3b6130)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xd3b6138), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xd3b613c), PreemptionPolicy:(*v1.PreemptionPolicy)(0x8138a98), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736287678, loc:(*time.Location)(0x5d1d160)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736287678, loc:(*time.Location)(0x5d1d160)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, 
v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736287678, loc:(*time.Location)(0x5d1d160)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736287678, loc:(*time.Location)(0x5d1d160)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.12", PodIP:"10.244.2.28", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.28"}}, StartTime:(*v1.Time)(0x91a89a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x9195b80)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x9195bd0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://d6169dbf8107492f7017a850c5d1f1e81376bac52d862533c2253f283db42931", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x9f1a120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, 
ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x9f1a110), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xd3b61bf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:22:09.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-186" for this suite. • [SLOW TEST:50.890 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":303,"completed":293,"skipped":4659,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 12:22:09.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 21 12:22:24.273: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 21 12:22:26.493: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736287744, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736287744, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736287744, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736287744, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 21 
12:22:29.529: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:22:29.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7990" for this suite. STEP: Destroying namespace "webhook-7990-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:20.295 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":303,"completed":294,"skipped":4665,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 12:22:29.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 21 12:22:29.793: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9dc971c9-33ba-46f8-a3c8-619e87e20844" in namespace "downward-api-1626" to be "Succeeded or Failed" Sep 21 12:22:29.798: INFO: Pod "downwardapi-volume-9dc971c9-33ba-46f8-a3c8-619e87e20844": Phase="Pending", Reason="", readiness=false. Elapsed: 4.726122ms Sep 21 12:22:31.840: INFO: Pod "downwardapi-volume-9dc971c9-33ba-46f8-a3c8-619e87e20844": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047129282s Sep 21 12:22:33.848: INFO: Pod "downwardapi-volume-9dc971c9-33ba-46f8-a3c8-619e87e20844": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.05469374s STEP: Saw pod success Sep 21 12:22:33.848: INFO: Pod "downwardapi-volume-9dc971c9-33ba-46f8-a3c8-619e87e20844" satisfied condition "Succeeded or Failed" Sep 21 12:22:33.854: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-9dc971c9-33ba-46f8-a3c8-619e87e20844 container client-container: STEP: delete the pod Sep 21 12:22:33.967: INFO: Waiting for pod downwardapi-volume-9dc971c9-33ba-46f8-a3c8-619e87e20844 to disappear Sep 21 12:22:33.973: INFO: Pod downwardapi-volume-9dc971c9-33ba-46f8-a3c8-619e87e20844 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:22:33.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1626" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":295,"skipped":4712,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 12:22:33.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:22:34.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9580" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":303,"completed":296,"skipped":4739,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 12:22:34.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Sep 21 12:22:34.281: INFO: Waiting up to 5m0s for pod "pod-837160e1-b18a-4e8c-baaf-a957e8983786" in namespace "emptydir-828" to be "Succeeded or Failed" Sep 21 12:22:34.367: INFO: Pod "pod-837160e1-b18a-4e8c-baaf-a957e8983786": Phase="Pending", Reason="", readiness=false. Elapsed: 86.137033ms Sep 21 12:22:36.410: INFO: Pod "pod-837160e1-b18a-4e8c-baaf-a957e8983786": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129501034s Sep 21 12:22:38.418: INFO: Pod "pod-837160e1-b18a-4e8c-baaf-a957e8983786": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.137514498s STEP: Saw pod success Sep 21 12:22:38.418: INFO: Pod "pod-837160e1-b18a-4e8c-baaf-a957e8983786" satisfied condition "Succeeded or Failed" Sep 21 12:22:38.424: INFO: Trying to get logs from node kali-worker2 pod pod-837160e1-b18a-4e8c-baaf-a957e8983786 container test-container: STEP: delete the pod Sep 21 12:22:38.493: INFO: Waiting for pod pod-837160e1-b18a-4e8c-baaf-a957e8983786 to disappear Sep 21 12:22:38.521: INFO: Pod pod-837160e1-b18a-4e8c-baaf-a957e8983786 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:22:38.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-828" for this suite. 
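The "(non-root,0666,tmpfs)" case above boils down to a pod like the following: a memory-backed emptyDir mounted into a container that runs as a non-root UID and writes a mode-0666 file. This is a reconstructed sketch; the UID and the shell command are assumptions, not taken from the log (the real test uses the e2e mounttest image's file-writing flags).

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs  # illustrative name; real pods get generated names
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001          # any non-root UID
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /mnt/test && chmod 0666 /mnt/test && ls -l /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory         # tmpfs-backed, hence the [LinuxOnly] tag
```

The pod is expected to run to completion ("Succeeded or Failed" condition in the log) and its container log shows the resulting file mode.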
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":297,"skipped":4761,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 12:22:38.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Sep 21 12:22:38.820: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:22:46.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3863" for this suite. 
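The PodSpec dumped in the earlier RestartAlways failure can be condensed back into a manifest roughly like this. The init containers, commands, images, and the run1 CPU request/limit are taken from the logged spec; the pod name is illustrative and defaulted fields are trimmed. For the RestartNever variants in this suite, only restartPolicy and the init commands differ.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example     # real pods carry generated UUID-style names
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]  # exits nonzero, so init2 never runs and run1 never starts
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: 100m
      limits:
        cpu: 100m
```

Because init containers run sequentially and must all succeed before app containers start, the dump shows init1 with RestartCount:3, init2 stuck Waiting, run1 never started, and the pod Pending with Reason "ContainersNotInitialized".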
• [SLOW TEST:8.422 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":303,"completed":298,"skipped":4784,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 12:22:46.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-6877 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 21 12:22:47.037: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Sep 21 12:22:47.416: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 21 
12:22:49.439: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 21 12:22:51.424: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 21 12:22:53.425: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 21 12:22:55.440: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 21 12:22:57.427: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 21 12:22:59.426: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 21 12:23:01.425: INFO: The status of Pod netserver-0 is Running (Ready = true) Sep 21 12:23:01.436: INFO: The status of Pod netserver-1 is Running (Ready = false) Sep 21 12:23:03.445: INFO: The status of Pod netserver-1 is Running (Ready = false) Sep 21 12:23:05.445: INFO: The status of Pod netserver-1 is Running (Ready = false) Sep 21 12:23:07.446: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Sep 21 12:23:11.480: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.5:8080/dial?request=hostname&protocol=udp&host=10.244.1.4&port=8081&tries=1'] Namespace:pod-network-test-6877 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 21 12:23:11.480: INFO: >>> kubeConfig: /root/.kube/config I0921 12:23:11.586399 10 log.go:181] (0x8576e70) (0x8576f50) Create stream I0921 12:23:11.586522 10 log.go:181] (0x8576e70) (0x8576f50) Stream added, broadcasting: 1 I0921 12:23:11.590107 10 log.go:181] (0x8576e70) Reply frame received for 1 I0921 12:23:11.590379 10 log.go:181] (0x8576e70) (0x7da4150) Create stream I0921 12:23:11.590502 10 log.go:181] (0x8576e70) (0x7da4150) Stream added, broadcasting: 3 I0921 12:23:11.592255 10 log.go:181] (0x8576e70) Reply frame received for 3 I0921 12:23:11.592396 10 log.go:181] (0x8576e70) (0x85772d0) Create stream I0921 12:23:11.592504 10 log.go:181] (0x8576e70) (0x85772d0) Stream added, 
broadcasting: 5 I0921 12:23:11.594070 10 log.go:181] (0x8576e70) Reply frame received for 5 I0921 12:23:11.681706 10 log.go:181] (0x8576e70) Data frame received for 3 I0921 12:23:11.681853 10 log.go:181] (0x7da4150) (3) Data frame handling I0921 12:23:11.681962 10 log.go:181] (0x7da4150) (3) Data frame sent I0921 12:23:11.682062 10 log.go:181] (0x8576e70) Data frame received for 3 I0921 12:23:11.682122 10 log.go:181] (0x7da4150) (3) Data frame handling I0921 12:23:11.682432 10 log.go:181] (0x8576e70) Data frame received for 5 I0921 12:23:11.682513 10 log.go:181] (0x85772d0) (5) Data frame handling I0921 12:23:11.683531 10 log.go:181] (0x8576e70) Data frame received for 1 I0921 12:23:11.683608 10 log.go:181] (0x8576f50) (1) Data frame handling I0921 12:23:11.683679 10 log.go:181] (0x8576f50) (1) Data frame sent I0921 12:23:11.683758 10 log.go:181] (0x8576e70) (0x8576f50) Stream removed, broadcasting: 1 I0921 12:23:11.684340 10 log.go:181] (0x8576e70) Go away received I0921 12:23:11.684623 10 log.go:181] (0x8576e70) (0x8576f50) Stream removed, broadcasting: 1 I0921 12:23:11.684766 10 log.go:181] (0x8576e70) (0x7da4150) Stream removed, broadcasting: 3 I0921 12:23:11.684871 10 log.go:181] (0x8576e70) (0x85772d0) Stream removed, broadcasting: 5 Sep 21 12:23:11.685: INFO: Waiting for responses: map[] Sep 21 12:23:11.691: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.5:8080/dial?request=hostname&protocol=udp&host=10.244.2.30&port=8081&tries=1'] Namespace:pod-network-test-6877 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 21 12:23:11.691: INFO: >>> kubeConfig: /root/.kube/config I0921 12:23:11.799438 10 log.go:181] (0xc39d180) (0xc39d260) Create stream I0921 12:23:11.799563 10 log.go:181] (0xc39d180) (0xc39d260) Stream added, broadcasting: 1 I0921 12:23:11.804073 10 log.go:181] (0xc39d180) Reply frame received for 1 I0921 12:23:11.804455 10 log.go:181] (0xc39d180) 
(0xc39d5e0) Create stream I0921 12:23:11.804580 10 log.go:181] (0xc39d180) (0xc39d5e0) Stream added, broadcasting: 3 I0921 12:23:11.806640 10 log.go:181] (0xc39d180) Reply frame received for 3 I0921 12:23:11.806812 10 log.go:181] (0xc39d180) (0xc39d8f0) Create stream I0921 12:23:11.806911 10 log.go:181] (0xc39d180) (0xc39d8f0) Stream added, broadcasting: 5 I0921 12:23:11.808655 10 log.go:181] (0xc39d180) Reply frame received for 5 I0921 12:23:11.883854 10 log.go:181] (0xc39d180) Data frame received for 3 I0921 12:23:11.884079 10 log.go:181] (0xc39d5e0) (3) Data frame handling I0921 12:23:11.884308 10 log.go:181] (0xc39d180) Data frame received for 5 I0921 12:23:11.884456 10 log.go:181] (0xc39d8f0) (5) Data frame handling I0921 12:23:11.884601 10 log.go:181] (0xc39d5e0) (3) Data frame sent I0921 12:23:11.884741 10 log.go:181] (0xc39d180) Data frame received for 3 I0921 12:23:11.884848 10 log.go:181] (0xc39d5e0) (3) Data frame handling I0921 12:23:11.885352 10 log.go:181] (0xc39d180) Data frame received for 1 I0921 12:23:11.885450 10 log.go:181] (0xc39d260) (1) Data frame handling I0921 12:23:11.885549 10 log.go:181] (0xc39d260) (1) Data frame sent I0921 12:23:11.885726 10 log.go:181] (0xc39d180) (0xc39d260) Stream removed, broadcasting: 1 I0921 12:23:11.885854 10 log.go:181] (0xc39d180) Go away received I0921 12:23:11.886117 10 log.go:181] (0xc39d180) (0xc39d260) Stream removed, broadcasting: 1 I0921 12:23:11.886210 10 log.go:181] (0xc39d180) (0xc39d5e0) Stream removed, broadcasting: 3 I0921 12:23:11.886287 10 log.go:181] (0xc39d180) (0xc39d8f0) Stream removed, broadcasting: 5 Sep 21 12:23:11.886: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:23:11.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6877" for this suite. 
• [SLOW TEST:24.934 seconds] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":303,"completed":299,"skipped":4798,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 12:23:11.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition Sep 21 12:23:11.998: INFO: Waiting up to 5m0s for pod "var-expansion-307d4dd6-7264-450f-b7c5-3f3e35e83ce3" in namespace 
"var-expansion-6071" to be "Succeeded or Failed" Sep 21 12:23:12.005: INFO: Pod "var-expansion-307d4dd6-7264-450f-b7c5-3f3e35e83ce3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.636895ms Sep 21 12:23:14.081: INFO: Pod "var-expansion-307d4dd6-7264-450f-b7c5-3f3e35e83ce3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082611384s Sep 21 12:23:16.088: INFO: Pod "var-expansion-307d4dd6-7264-450f-b7c5-3f3e35e83ce3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089761394s STEP: Saw pod success Sep 21 12:23:16.088: INFO: Pod "var-expansion-307d4dd6-7264-450f-b7c5-3f3e35e83ce3" satisfied condition "Succeeded or Failed" Sep 21 12:23:16.140: INFO: Trying to get logs from node kali-worker2 pod var-expansion-307d4dd6-7264-450f-b7c5-3f3e35e83ce3 container dapi-container: STEP: delete the pod Sep 21 12:23:16.206: INFO: Waiting for pod var-expansion-307d4dd6-7264-450f-b7c5-3f3e35e83ce3 to disappear Sep 21 12:23:16.213: INFO: Pod var-expansion-307d4dd6-7264-450f-b7c5-3f3e35e83ce3 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:23:16.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6071" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":303,"completed":300,"skipped":4834,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 12:23:16.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Sep 21 12:23:16.316: INFO: Waiting up to 5m0s for pod "pod-480e3517-c64a-46fc-99af-a3ca09d2953c" in namespace "emptydir-3116" to be "Succeeded or Failed" Sep 21 12:23:16.336: INFO: Pod "pod-480e3517-c64a-46fc-99af-a3ca09d2953c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.409376ms Sep 21 12:23:18.374: INFO: Pod "pod-480e3517-c64a-46fc-99af-a3ca09d2953c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058042062s Sep 21 12:23:20.382: INFO: Pod "pod-480e3517-c64a-46fc-99af-a3ca09d2953c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065513528s Sep 21 12:23:22.390: INFO: Pod "pod-480e3517-c64a-46fc-99af-a3ca09d2953c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.074359157s STEP: Saw pod success Sep 21 12:23:22.391: INFO: Pod "pod-480e3517-c64a-46fc-99af-a3ca09d2953c" satisfied condition "Succeeded or Failed" Sep 21 12:23:22.396: INFO: Trying to get logs from node kali-worker2 pod pod-480e3517-c64a-46fc-99af-a3ca09d2953c container test-container: STEP: delete the pod Sep 21 12:23:22.445: INFO: Waiting for pod pod-480e3517-c64a-46fc-99af-a3ca09d2953c to disappear Sep 21 12:23:22.482: INFO: Pod pod-480e3517-c64a-46fc-99af-a3ca09d2953c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:23:22.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3116" for this suite. • [SLOW TEST:6.325 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":301,"skipped":4845,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 12:23:22.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 12:23:22.616: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Sep 21 12:23:43.317: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1948 create -f -' Sep 21 12:23:49.016: INFO: stderr: "" Sep 21 12:23:49.017: INFO: stdout: "e2e-test-crd-publish-openapi-6344-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Sep 21 12:23:49.018: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1948 delete e2e-test-crd-publish-openapi-6344-crds test-cr' Sep 21 12:23:50.217: INFO: stderr: "" Sep 21 12:23:50.217: INFO: stdout: "e2e-test-crd-publish-openapi-6344-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Sep 21 12:23:50.218: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1948 apply -f -' Sep 21 12:23:53.009: INFO: stderr: "" Sep 21 12:23:53.009: INFO: stdout: "e2e-test-crd-publish-openapi-6344-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Sep 21 12:23:53.009: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1948 delete e2e-test-crd-publish-openapi-6344-crds test-cr' Sep 21 12:23:54.201: INFO: stderr: "" Sep 21 12:23:54.202: INFO: stdout: "e2e-test-crd-publish-openapi-6344-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Sep 21 12:23:54.202: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:46255 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6344-crds' Sep 21 12:23:57.496: INFO: stderr: "" Sep 21 12:23:57.496: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6344-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:24:18.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1948" for this suite. • [SLOW TEST:55.486 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":303,"completed":302,"skipped":4861,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 21 12:24:18.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 21 12:24:18.127: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Sep 21 12:24:23.135: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Sep 21 12:24:23.136: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 21 12:24:27.220: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-7196 /apis/apps/v1/namespaces/deployment-7196/deployments/test-cleanup-deployment b4a79dbb-6858-4e22-a013-180b4ae8611b 2083713 1 2020-09-21 12:24:23 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2020-09-21 12:24:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} 
{kube-controller-manager Update apps/v1 2020-09-21 12:24:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xa931828 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-09-21 12:24:23 +0000 UTC,LastTransitionTime:2020-09-21 12:24:23 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-5d446bdd47" has successfully progressed.,LastUpdateTime:2020-09-21 12:24:26 +0000 UTC,LastTransitionTime:2020-09-21 12:24:23 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Sep 21 12:24:27.227: INFO: New ReplicaSet "test-cleanup-deployment-5d446bdd47" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5d446bdd47 deployment-7196 /apis/apps/v1/namespaces/deployment-7196/replicasets/test-cleanup-deployment-5d446bdd47 26c62f48-23f7-4551-a964-9342af4b42ca 2083702 1 2020-09-21 12:24:23 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment b4a79dbb-6858-4e22-a013-180b4ae8611b 0xa931c57 0xa931c58}] [] [{kube-controller-manager Update apps/v1 2020-09-21 12:24:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b4a79dbb-6858-4e22-a013-180b4ae8611b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5d446bdd47,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xa931d18 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Sep 21 12:24:27.235: INFO: Pod "test-cleanup-deployment-5d446bdd47-66zss" is available: &Pod{ObjectMeta:{test-cleanup-deployment-5d446bdd47-66zss test-cleanup-deployment-5d446bdd47- deployment-7196 /api/v1/namespaces/deployment-7196/pods/test-cleanup-deployment-5d446bdd47-66zss 77c054ad-8c3c-4420-ba2e-290fefc77a24 2083701 0 2020-09-21 12:24:23 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-5d446bdd47 26c62f48-23f7-4551-a964-9342af4b42ca 0xa96c157 0xa96c158}] [] [{kube-controller-manager Update v1 2020-09-21 12:24:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26c62f48-23f7-4551-a964-9342af4b42ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-21 12:24:26 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.34\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2rxbc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2rxbc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2rxbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolic
y:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 12:24:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 12:24:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 12:24:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-21 12:24:23 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.34,StartTime:2020-09-21 12:24:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-21 12:24:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://74b4390f7fd701d8e57620003cdd702183762267b929053ceb5605ec09c10d41,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.34,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 21 12:24:27.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7196" for this suite. 
• [SLOW TEST:9.205 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":303,"completed":303,"skipped":4886,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSep 21 12:24:27.256: INFO: Running AfterSuite actions on all nodes Sep 21 12:24:27.257: INFO: Running AfterSuite actions on node 1 Sep 21 12:24:27.257: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":303,"completed":303,"skipped":4929,"failed":0} Ran 303 of 5232 Specs in 7628.422 seconds SUCCESS! -- 303 Passed | 0 Failed | 0 Pending | 4929 Skipped PASS