I0907 07:27:17.311361 7 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0907 07:27:17.311564 7 e2e.go:129] Starting e2e run "c23436dc-1de2-4766-abbf-79def5975477" on Ginkgo node 1
{"msg":"Test Suite starting","total":303,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1599463636 - Will randomize all specs
Will run 303 of 5232 specs

Sep 7 07:27:17.371: INFO: >>> kubeConfig: /root/.kube/config
Sep 7 07:27:17.374: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Sep 7 07:27:17.397: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep 7 07:27:17.432: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep 7 07:27:17.432: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Sep 7 07:27:17.432: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep 7 07:27:17.438: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Sep 7 07:27:17.438: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Sep 7 07:27:17.438: INFO: e2e test version: v1.19.1-rc.0
Sep 7 07:27:17.439: INFO: kube-apiserver version: v1.19.0
Sep 7 07:27:17.439: INFO: >>> kubeConfig: /root/.kube/config
Sep 7 07:27:17.443: INFO: Cluster IP family: ipv4
SSS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 07:27:17.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
Sep 7 07:27:17.650: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Sep 7 07:27:17.659: INFO: Waiting up to 5m0s for pod "downward-api-111bba7e-5937-40c2-bbc9-3a7bf1d8a94a" in namespace "downward-api-5722" to be "Succeeded or Failed"
Sep 7 07:27:17.666: INFO: Pod "downward-api-111bba7e-5937-40c2-bbc9-3a7bf1d8a94a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.772134ms
Sep 7 07:27:19.671: INFO: Pod "downward-api-111bba7e-5937-40c2-bbc9-3a7bf1d8a94a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012361764s
Sep 7 07:27:21.676: INFO: Pod "downward-api-111bba7e-5937-40c2-bbc9-3a7bf1d8a94a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016945292s
STEP: Saw pod success
Sep 7 07:27:21.676: INFO: Pod "downward-api-111bba7e-5937-40c2-bbc9-3a7bf1d8a94a" satisfied condition "Succeeded or Failed"
Sep 7 07:27:21.679: INFO: Trying to get logs from node latest-worker pod downward-api-111bba7e-5937-40c2-bbc9-3a7bf1d8a94a container dapi-container:
STEP: delete the pod
Sep 7 07:27:21.715: INFO: Waiting for pod downward-api-111bba7e-5937-40c2-bbc9-3a7bf1d8a94a to disappear
Sep 7 07:27:21.782: INFO: Pod downward-api-111bba7e-5937-40c2-bbc9-3a7bf1d8a94a no longer exists
[AfterEach] [sig-node] Downward API
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 07:27:21.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5722" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":303,"completed":1,"skipped":3,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 07:27:21.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Sep 7 07:27:22.545: INFO: Pod name wrapped-volume-race-1e5bf8a9-2867-4f8d-a5dd-3b489daba19b: Found 0 pods out of 5
Sep 7 07:27:27.553: INFO: Pod name wrapped-volume-race-1e5bf8a9-2867-4f8d-a5dd-3b489daba19b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-1e5bf8a9-2867-4f8d-a5dd-3b489daba19b in namespace emptydir-wrapper-2950, will wait for the garbage collector to delete the pods
Sep 7 07:27:42.007: INFO: Deleting ReplicationController wrapped-volume-race-1e5bf8a9-2867-4f8d-a5dd-3b489daba19b took: 7.617855ms
Sep 7 07:27:42.507: INFO: Terminating ReplicationController wrapped-volume-race-1e5bf8a9-2867-4f8d-a5dd-3b489daba19b pods took: 500.174307ms
STEP: Creating RC which spawns configmap-volume pods
Sep 7 07:27:52.523: INFO: Pod name wrapped-volume-race-9e2d2694-af4e-462e-889e-823d8a0d67d9: Found 1 pods out of 5
Sep 7 07:27:57.545: INFO: Pod name wrapped-volume-race-9e2d2694-af4e-462e-889e-823d8a0d67d9: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-9e2d2694-af4e-462e-889e-823d8a0d67d9 in namespace emptydir-wrapper-2950, will wait for the garbage collector to delete the pods
Sep 7 07:28:11.631: INFO: Deleting ReplicationController wrapped-volume-race-9e2d2694-af4e-462e-889e-823d8a0d67d9 took: 7.155901ms
Sep 7 07:28:11.831: INFO: Terminating ReplicationController wrapped-volume-race-9e2d2694-af4e-462e-889e-823d8a0d67d9 pods took: 200.262395ms
STEP: Creating RC which spawns configmap-volume pods
Sep 7 07:28:22.622: INFO: Pod name wrapped-volume-race-707f9918-52a9-466a-9046-b0ab5669d17a: Found 0 pods out of 5
Sep 7 07:28:27.631: INFO: Pod name wrapped-volume-race-707f9918-52a9-466a-9046-b0ab5669d17a: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-707f9918-52a9-466a-9046-b0ab5669d17a in namespace emptydir-wrapper-2950, will wait for the garbage collector to delete the pods
Sep 7 07:28:43.715: INFO: Deleting ReplicationController wrapped-volume-race-707f9918-52a9-466a-9046-b0ab5669d17a took: 8.367509ms
Sep 7 07:28:44.315: INFO: Terminating ReplicationController wrapped-volume-race-707f9918-52a9-466a-9046-b0ab5669d17a pods took: 600.317045ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 07:28:52.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2950" for this suite.

• [SLOW TEST:91.082 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
should not cause race condition when used for configmaps [Serial] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":303,"completed":2,"skipped":6,"failed":0}
[sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Events
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 07:28:52.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a test event
STEP: listing all events in all namespaces
STEP: patching the test event
STEP: fetching the test event
STEP: deleting the test event
STEP: listing all events in all namespaces
[AfterEach] [sig-api-machinery] Events
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 07:28:53.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2005" for this suite.
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":3,"skipped":6,"failed":0}
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 07:28:53.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 07:28:57.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6387" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":4,"skipped":11,"failed":0}
SSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 07:28:57.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 7 07:28:57.201: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Sep 7 07:28:57.208: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 7 07:28:57.274: INFO: Number of nodes with available pods: 0
Sep 7 07:28:57.274: INFO: Node latest-worker is running more than one daemon pod
Sep 7 07:28:58.281: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 7 07:28:58.284: INFO: Number of nodes with available pods: 0
Sep 7 07:28:58.284: INFO: Node latest-worker is running more than one daemon pod
Sep 7 07:28:59.319: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 7 07:28:59.359: INFO: Number of nodes with available pods: 0
Sep 7 07:28:59.359: INFO: Node latest-worker is running more than one daemon pod
Sep 7 07:29:00.366: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 7 07:29:00.640: INFO: Number of nodes with available pods: 0
Sep 7 07:29:00.640: INFO: Node latest-worker is running more than one daemon pod
Sep 7 07:29:01.334: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 7 07:29:01.349: INFO: Number of nodes with available pods: 1
Sep 7 07:29:01.349: INFO: Node latest-worker is running more than one daemon pod
Sep 7 07:29:02.314: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 7 07:29:02.325: INFO: Number of nodes with available pods: 2
Sep 7 07:29:02.326: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Sep 7 07:29:02.459: INFO: Wrong image for pod: daemon-set-72bqh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 7 07:29:02.459: INFO: Wrong image for pod: daemon-set-zdxfm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 7 07:29:02.466: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 7 07:29:03.593: INFO: Wrong image for pod: daemon-set-72bqh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 7 07:29:03.593: INFO: Wrong image for pod: daemon-set-zdxfm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 7 07:29:03.629: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 7 07:29:04.471: INFO: Wrong image for pod: daemon-set-72bqh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 7 07:29:04.471: INFO: Pod daemon-set-72bqh is not available
Sep 7 07:29:04.471: INFO: Wrong image for pod: daemon-set-zdxfm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 7 07:29:04.475: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 7 07:29:05.471: INFO: Wrong image for pod: daemon-set-72bqh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 7 07:29:05.472: INFO: Pod daemon-set-72bqh is not available
Sep 7 07:29:05.472: INFO: Wrong image for pod: daemon-set-zdxfm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 7 07:29:05.476: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 7 07:29:06.471: INFO: Wrong image for pod: daemon-set-72bqh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 7 07:29:06.471: INFO: Pod daemon-set-72bqh is not available
Sep 7 07:29:06.471: INFO: Wrong image for pod: daemon-set-zdxfm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 7 07:29:06.476: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 7 07:29:07.471: INFO: Wrong image for pod: daemon-set-72bqh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 7 07:29:07.471: INFO: Pod daemon-set-72bqh is not available
Sep 7 07:29:07.471: INFO: Wrong image for pod: daemon-set-zdxfm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 7 07:29:07.476: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 7 07:29:08.476: INFO: Wrong image for pod: daemon-set-72bqh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 7 07:29:08.476: INFO: Pod daemon-set-72bqh is not available
Sep 7 07:29:08.476: INFO: Wrong image for pod: daemon-set-zdxfm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 7 07:29:08.479: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 7 07:29:09.599: INFO: Wrong image for pod: daemon-set-72bqh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 7 07:29:09.599: INFO: Pod daemon-set-72bqh is not available
Sep 7 07:29:09.599: INFO: Wrong image for pod: daemon-set-zdxfm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 7 07:29:09.604: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 7 07:29:10.471: INFO: Wrong image for pod: daemon-set-72bqh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 7 07:29:10.471: INFO: Pod daemon-set-72bqh is not available
Sep 7 07:29:10.471: INFO: Wrong image for pod: daemon-set-zdxfm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 7 07:29:10.510: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 7 07:29:11.472: INFO: Wrong image for pod: daemon-set-72bqh. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 7 07:29:11.472: INFO: Pod daemon-set-72bqh is not available
Sep 7 07:29:11.472: INFO: Wrong image for pod: daemon-set-zdxfm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 7 07:29:11.477: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 7 07:29:12.479: INFO: Pod daemon-set-sd8wg is not available
Sep 7 07:29:12.479: INFO: Wrong image for pod: daemon-set-zdxfm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 7 07:29:12.483: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 7 07:29:13.472: INFO: Pod daemon-set-sd8wg is not available
Sep 7 07:29:13.472: INFO: Wrong image for pod: daemon-set-zdxfm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 7 07:29:13.477: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 7 07:29:14.503: INFO: Pod daemon-set-sd8wg is not available
Sep 7 07:29:14.503: INFO: Wrong image for pod: daemon-set-zdxfm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 7 07:29:14.507: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 7 07:29:15.471: INFO: Wrong image for pod: daemon-set-zdxfm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 7 07:29:15.476: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 7 07:29:16.479: INFO: Wrong image for pod: daemon-set-zdxfm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 7 07:29:16.479: INFO: Pod daemon-set-zdxfm is not available
Sep 7 07:29:16.483: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 7 07:29:17.471: INFO: Pod daemon-set-h8ptk is not available
Sep 7 07:29:17.476: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Sep 7 07:29:17.480: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 7 07:29:17.483: INFO: Number of nodes with available pods: 1
Sep 7 07:29:17.483: INFO: Node latest-worker is running more than one daemon pod
Sep 7 07:29:18.488: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 7 07:29:18.492: INFO: Number of nodes with available pods: 1
Sep 7 07:29:18.492: INFO: Node latest-worker is running more than one daemon pod
Sep 7 07:29:19.489: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 7 07:29:19.493: INFO: Number of nodes with available pods: 1
Sep 7 07:29:19.493: INFO: Node latest-worker is running more than one daemon pod
Sep 7 07:29:20.488: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 7 07:29:20.492: INFO: Number of nodes with available pods: 2
Sep 7 07:29:20.492: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6689, will wait for the garbage collector to delete the pods
Sep 7 07:29:20.585: INFO: Deleting DaemonSet.extensions daemon-set took: 6.259517ms
Sep 7 07:29:20.985: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.216536ms
Sep 7 07:29:32.294: INFO: Number of nodes with available pods: 0
Sep 7 07:29:32.294: INFO: Number of running nodes: 0, number of available pods: 0
Sep 7 07:29:32.319: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6689/daemonsets","resourceVersion":"267976"},"items":null}
Sep 7 07:29:32.378: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6689/pods","resourceVersion":"267978"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 07:29:32.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6689" for this suite.
• [SLOW TEST:35.294 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":303,"completed":5,"skipped":14,"failed":0}
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 07:29:32.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Sep 7 07:29:32.561: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Sep 7 07:29:32.573: INFO: Waiting for terminating namespaces to be deleted...
Sep 7 07:29:32.575: INFO: Logging pods the apiserver thinks is on node latest-worker before test
Sep 7 07:29:32.579: INFO: rally-161fcef7-5xbxwnt1-5bs7t from c-rally-161fcef7-6ny0tr7x started at 2020-09-07 07:29:32 +0000 UTC (1 container statuses recorded)
Sep 7 07:29:32.579: INFO: Container rally-161fcef7-5xbxwnt1 ready: false, restart count 0
Sep 7 07:29:32.579: INFO: kindnet-d72xf from kube-system started at 2020-09-06 13:49:16 +0000 UTC (1 container statuses recorded)
Sep 7 07:29:32.579: INFO: Container kindnet-cni ready: true, restart count 0
Sep 7 07:29:32.579: INFO: kube-proxy-64mm6 from kube-system started at 2020-09-06 13:49:14 +0000 UTC (1 container statuses recorded)
Sep 7 07:29:32.579: INFO: Container kube-proxy ready: true, restart count 0
Sep 7 07:29:32.579: INFO: busybox-host-aliasesc83b78c7-1993-4de6-975f-ef2d494f3223 from kubelet-test-6387 started at 2020-09-07 07:28:53 +0000 UTC (1 container statuses recorded)
Sep 7 07:29:32.579: INFO: Container busybox-host-aliasesc83b78c7-1993-4de6-975f-ef2d494f3223 ready: true, restart count 0
Sep 7 07:29:32.579: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test
Sep 7 07:29:32.584: INFO: rally-161fcef7-5xbxwnt1-rh2cl from c-rally-161fcef7-6ny0tr7x started at 2020-09-07 07:29:27 +0000 UTC (1 container statuses recorded)
Sep 7 07:29:32.584: INFO: Container rally-161fcef7-5xbxwnt1 ready: true, restart count 0
Sep 7 07:29:32.584: INFO: kindnet-dktmm from kube-system started at 2020-09-06 13:49:16 +0000 UTC (1 container statuses recorded)
Sep 7 07:29:32.584: INFO: Container kindnet-cni ready: true, restart count 0
Sep 7 07:29:32.584: INFO: kube-proxy-b55gf from kube-system started at 2020-09-06 13:49:14 +0000 UTC (1 container statuses recorded)
Sep 7 07:29:32.584: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-651f8b1d-8915-4441-aefe-23917355f3d9 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-651f8b1d-8915-4441-aefe-23917355f3d9 off the node latest-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-651f8b1d-8915-4441-aefe-23917355f3d9
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 07:29:51.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5394" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81

• [SLOW TEST:19.022 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":303,"completed":6,"skipped":16,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 07:29:51.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-k8vx
STEP: Creating a pod to test atomic-volume-subpath
Sep 7 07:29:51.557: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-k8vx" in namespace "subpath-6496" to be "Succeeded or Failed"
Sep 7 07:29:51.566: INFO: Pod "pod-subpath-test-configmap-k8vx": Phase="Pending", Reason="", readiness=false. Elapsed: 9.499493ms
Sep 7 07:29:53.570: INFO: Pod "pod-subpath-test-configmap-k8vx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01373663s
Sep 7 07:29:55.575: INFO: Pod "pod-subpath-test-configmap-k8vx": Phase="Running", Reason="", readiness=true. Elapsed: 4.018420416s
Sep 7 07:29:57.578: INFO: Pod "pod-subpath-test-configmap-k8vx": Phase="Running", Reason="", readiness=true. Elapsed: 6.021352555s
Sep 7 07:29:59.582: INFO: Pod "pod-subpath-test-configmap-k8vx": Phase="Running", Reason="", readiness=true. Elapsed: 8.025153426s
Sep 7 07:30:01.586: INFO: Pod "pod-subpath-test-configmap-k8vx": Phase="Running", Reason="", readiness=true. Elapsed: 10.029143198s
Sep 7 07:30:03.591: INFO: Pod "pod-subpath-test-configmap-k8vx": Phase="Running", Reason="", readiness=true. Elapsed: 12.033856997s
Sep 7 07:30:05.595: INFO: Pod "pod-subpath-test-configmap-k8vx": Phase="Running", Reason="", readiness=true. Elapsed: 14.038451622s
Sep 7 07:30:07.599: INFO: Pod "pod-subpath-test-configmap-k8vx": Phase="Running", Reason="", readiness=true. Elapsed: 16.042050069s
Sep 7 07:30:09.604: INFO: Pod "pod-subpath-test-configmap-k8vx": Phase="Running", Reason="", readiness=true. Elapsed: 18.047089009s
Sep 7 07:30:11.609: INFO: Pod "pod-subpath-test-configmap-k8vx": Phase="Running", Reason="", readiness=true. Elapsed: 20.052105371s
Sep 7 07:30:13.613: INFO: Pod "pod-subpath-test-configmap-k8vx": Phase="Running", Reason="", readiness=true. Elapsed: 22.05675328s
Sep 7 07:30:15.618: INFO: Pod "pod-subpath-test-configmap-k8vx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.061367437s
STEP: Saw pod success
Sep 7 07:30:15.618: INFO: Pod "pod-subpath-test-configmap-k8vx" satisfied condition "Succeeded or Failed"
Sep 7 07:30:15.621: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-k8vx container test-container-subpath-configmap-k8vx:
STEP: delete the pod
Sep 7 07:30:15.657: INFO: Waiting for pod pod-subpath-test-configmap-k8vx to disappear
Sep 7 07:30:15.663: INFO: Pod pod-subpath-test-configmap-k8vx no longer exists
STEP: Deleting pod pod-subpath-test-configmap-k8vx
Sep 7 07:30:15.663: INFO: Deleting pod "pod-subpath-test-configmap-k8vx" in namespace "subpath-6496"
[AfterEach] [sig-storage] Subpath
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 07:30:15.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6496" for this suite.
• [SLOW TEST:24.225 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":303,"completed":7,"skipped":47,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:30:15.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:30:31.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4589" for this suite. • [SLOW TEST:16.239 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":303,"completed":8,"skipped":67,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:30:31.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:30:40.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8053" for this suite. 
• [SLOW TEST:8.118 seconds] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":303,"completed":9,"skipped":90,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:30:40.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-c4e6e11e-0212-40bb-ada7-b303b73f0bc1 in namespace container-probe-4735 Sep 7 07:30:44.264: INFO: Started pod busybox-c4e6e11e-0212-40bb-ada7-b303b73f0bc1 in namespace container-probe-4735 STEP: checking the pod's current state and verifying that restartCount is present Sep 7 07:30:44.267: INFO: Initial restart count of pod busybox-c4e6e11e-0212-40bb-ada7-b303b73f0bc1 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:34:45.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4735" for this suite. • [SLOW TEST:245.627 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":10,"skipped":129,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API 
volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:34:45.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 7 07:34:45.837: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6178b116-bb8f-4954-ab73-4372a12b65ed" in namespace "downward-api-5243" to be "Succeeded or Failed" Sep 7 07:34:45.841: INFO: Pod "downwardapi-volume-6178b116-bb8f-4954-ab73-4372a12b65ed": Phase="Pending", Reason="", readiness=false. Elapsed: 3.458169ms Sep 7 07:34:47.846: INFO: Pod "downwardapi-volume-6178b116-bb8f-4954-ab73-4372a12b65ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008355534s Sep 7 07:34:49.850: INFO: Pod "downwardapi-volume-6178b116-bb8f-4954-ab73-4372a12b65ed": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013126333s STEP: Saw pod success Sep 7 07:34:49.850: INFO: Pod "downwardapi-volume-6178b116-bb8f-4954-ab73-4372a12b65ed" satisfied condition "Succeeded or Failed" Sep 7 07:34:49.853: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-6178b116-bb8f-4954-ab73-4372a12b65ed container client-container: STEP: delete the pod Sep 7 07:34:49.965: INFO: Waiting for pod downwardapi-volume-6178b116-bb8f-4954-ab73-4372a12b65ed to disappear Sep 7 07:34:49.979: INFO: Pod downwardapi-volume-6178b116-bb8f-4954-ab73-4372a12b65ed no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:34:49.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5243" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":11,"skipped":152,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:34:49.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 7 07:34:50.030: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 7 07:34:50.037: INFO: Waiting for terminating namespaces to be deleted... Sep 7 07:34:50.039: INFO: Logging pods the apiserver thinks is on node latest-worker before test Sep 7 07:34:50.043: INFO: rally-4c3ddf07-wyvdeejq from c-rally-4c3ddf07-p9hsod9p started at 2020-09-07 07:34:41 +0000 UTC (1 container statuses recorded) Sep 7 07:34:50.043: INFO: Container rally-4c3ddf07-wyvdeejq ready: true, restart count 0 Sep 7 07:34:50.043: INFO: kindnet-d72xf from kube-system started at 2020-09-06 13:49:16 +0000 UTC (1 container statuses recorded) Sep 7 07:34:50.043: INFO: Container kindnet-cni ready: true, restart count 0 Sep 7 07:34:50.043: INFO: kube-proxy-64mm6 from kube-system started at 2020-09-06 13:49:14 +0000 UTC (1 container statuses recorded) Sep 7 07:34:50.043: INFO: Container kube-proxy ready: true, restart count 0 Sep 7 07:34:50.043: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Sep 7 07:34:50.047: INFO: kindnet-dktmm from kube-system started at 2020-09-06 13:49:16 +0000 UTC (1 container statuses recorded) Sep 7 07:34:50.047: INFO: Container kindnet-cni ready: true, restart count 0 Sep 7 07:34:50.047: INFO: kube-proxy-b55gf from kube-system started at 2020-09-06 13:49:14 +0000 UTC (1 container statuses recorded) Sep 7 07:34:50.047: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. 
STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-f3f530c0-09f2-4800-8c14-60e31c302d13 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-f3f530c0-09f2-4800-8c14-60e31c302d13 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-f3f530c0-09f2-4800-8c14-60e31c302d13 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:39:58.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6620" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.425 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":303,"completed":12,"skipped":160,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:39:58.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:39:58.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7521" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":303,"completed":13,"skipped":183,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:39:58.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-948d4579-c4c7-47ab-abb2-382750dc70d7 STEP: Creating a pod to test consume configMaps 
Sep 7 07:39:58.735: INFO: Waiting up to 5m0s for pod "pod-configmaps-c55430d9-fd25-4846-91f0-2de97e193b8a" in namespace "configmap-5764" to be "Succeeded or Failed" Sep 7 07:39:58.741: INFO: Pod "pod-configmaps-c55430d9-fd25-4846-91f0-2de97e193b8a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148023ms Sep 7 07:40:00.746: INFO: Pod "pod-configmaps-c55430d9-fd25-4846-91f0-2de97e193b8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010436633s Sep 7 07:40:02.749: INFO: Pod "pod-configmaps-c55430d9-fd25-4846-91f0-2de97e193b8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014217306s STEP: Saw pod success Sep 7 07:40:02.749: INFO: Pod "pod-configmaps-c55430d9-fd25-4846-91f0-2de97e193b8a" satisfied condition "Succeeded or Failed" Sep 7 07:40:02.752: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-c55430d9-fd25-4846-91f0-2de97e193b8a container configmap-volume-test: STEP: delete the pod Sep 7 07:40:02.797: INFO: Waiting for pod pod-configmaps-c55430d9-fd25-4846-91f0-2de97e193b8a to disappear Sep 7 07:40:02.801: INFO: Pod pod-configmaps-c55430d9-fd25-4846-91f0-2de97e193b8a no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:40:02.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5764" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":14,"skipped":188,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:40:02.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-5222af06-8fbc-4d1e-a922-8e7d180dc8da in namespace container-probe-9936 Sep 7 07:40:08.932: INFO: Started pod liveness-5222af06-8fbc-4d1e-a922-8e7d180dc8da in namespace container-probe-9936 STEP: checking the pod's current state and verifying that restartCount is present Sep 7 07:40:08.934: INFO: Initial restart count of pod liveness-5222af06-8fbc-4d1e-a922-8e7d180dc8da is 0 Sep 7 07:40:28.997: INFO: Restart count of pod container-probe-9936/liveness-5222af06-8fbc-4d1e-a922-8e7d180dc8da is now 1 (20.062734068s elapsed) Sep 7 07:40:49.042: INFO: Restart count of pod 
container-probe-9936/liveness-5222af06-8fbc-4d1e-a922-8e7d180dc8da is now 2 (40.107536787s elapsed) Sep 7 07:41:09.124: INFO: Restart count of pod container-probe-9936/liveness-5222af06-8fbc-4d1e-a922-8e7d180dc8da is now 3 (1m0.189578107s elapsed) Sep 7 07:41:29.167: INFO: Restart count of pod container-probe-9936/liveness-5222af06-8fbc-4d1e-a922-8e7d180dc8da is now 4 (1m20.233391678s elapsed) Sep 7 07:42:37.345: INFO: Restart count of pod container-probe-9936/liveness-5222af06-8fbc-4d1e-a922-8e7d180dc8da is now 5 (2m28.41137515s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:42:37.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9936" for this suite. • [SLOW TEST:154.594 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":303,"completed":15,"skipped":222,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:42:37.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:42:37.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3920" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":303,"completed":16,"skipped":239,"failed":0} SSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:42:37.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Sep 7 07:42:38.216: INFO: Waiting up to 5m0s for pod "downward-api-0b17137a-a714-4d19-b7b1-3da3eca6da47" in namespace "downward-api-9255" to be "Succeeded or Failed" Sep 7 07:42:38.300: INFO: Pod "downward-api-0b17137a-a714-4d19-b7b1-3da3eca6da47": Phase="Pending", Reason="", readiness=false. Elapsed: 84.546874ms Sep 7 07:42:40.342: INFO: Pod "downward-api-0b17137a-a714-4d19-b7b1-3da3eca6da47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126054478s Sep 7 07:42:42.346: INFO: Pod "downward-api-0b17137a-a714-4d19-b7b1-3da3eca6da47": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.129934118s STEP: Saw pod success Sep 7 07:42:42.346: INFO: Pod "downward-api-0b17137a-a714-4d19-b7b1-3da3eca6da47" satisfied condition "Succeeded or Failed" Sep 7 07:42:42.348: INFO: Trying to get logs from node latest-worker2 pod downward-api-0b17137a-a714-4d19-b7b1-3da3eca6da47 container dapi-container: STEP: delete the pod Sep 7 07:42:42.391: INFO: Waiting for pod downward-api-0b17137a-a714-4d19-b7b1-3da3eca6da47 to disappear Sep 7 07:42:42.409: INFO: Pod downward-api-0b17137a-a714-4d19-b7b1-3da3eca6da47 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:42:42.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9255" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":303,"completed":17,"skipped":244,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:42:42.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3028 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-3028 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3028 Sep 7 07:42:42.617: INFO: Found 0 stateful pods, waiting for 1 Sep 7 07:42:52.623: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Sep 7 07:42:52.626: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 7 07:42:55.863: INFO: stderr: "I0907 07:42:55.715708 28 log.go:181] (0xc00003a420) (0xc000668000) Create stream\nI0907 07:42:55.715771 28 log.go:181] (0xc00003a420) (0xc000668000) Stream added, broadcasting: 1\nI0907 07:42:55.720745 28 log.go:181] (0xc00003a420) Reply frame received for 1\nI0907 07:42:55.720794 28 log.go:181] (0xc00003a420) (0xc000a40280) Create stream\nI0907 07:42:55.720808 28 log.go:181] (0xc00003a420) (0xc000a40280) Stream added, broadcasting: 3\nI0907 07:42:55.721939 28 log.go:181] (0xc00003a420) Reply frame received for 3\nI0907 07:42:55.721976 28 log.go:181] 
(0xc00003a420) (0xc0006680a0) Create stream\nI0907 07:42:55.721991 28 log.go:181] (0xc00003a420) (0xc0006680a0) Stream added, broadcasting: 5\nI0907 07:42:55.723088 28 log.go:181] (0xc00003a420) Reply frame received for 5\nI0907 07:42:55.827190 28 log.go:181] (0xc00003a420) Data frame received for 5\nI0907 07:42:55.827218 28 log.go:181] (0xc0006680a0) (5) Data frame handling\nI0907 07:42:55.827236 28 log.go:181] (0xc0006680a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0907 07:42:55.856353 28 log.go:181] (0xc00003a420) Data frame received for 3\nI0907 07:42:55.856406 28 log.go:181] (0xc000a40280) (3) Data frame handling\nI0907 07:42:55.856449 28 log.go:181] (0xc000a40280) (3) Data frame sent\nI0907 07:42:55.856530 28 log.go:181] (0xc00003a420) Data frame received for 5\nI0907 07:42:55.856552 28 log.go:181] (0xc0006680a0) (5) Data frame handling\nI0907 07:42:55.856572 28 log.go:181] (0xc00003a420) Data frame received for 3\nI0907 07:42:55.856583 28 log.go:181] (0xc000a40280) (3) Data frame handling\nI0907 07:42:55.858757 28 log.go:181] (0xc00003a420) Data frame received for 1\nI0907 07:42:55.858786 28 log.go:181] (0xc000668000) (1) Data frame handling\nI0907 07:42:55.858803 28 log.go:181] (0xc000668000) (1) Data frame sent\nI0907 07:42:55.858814 28 log.go:181] (0xc00003a420) (0xc000668000) Stream removed, broadcasting: 1\nI0907 07:42:55.858828 28 log.go:181] (0xc00003a420) Go away received\nI0907 07:42:55.859205 28 log.go:181] (0xc00003a420) (0xc000668000) Stream removed, broadcasting: 1\nI0907 07:42:55.859231 28 log.go:181] (0xc00003a420) (0xc000a40280) Stream removed, broadcasting: 3\nI0907 07:42:55.859250 28 log.go:181] (0xc00003a420) (0xc0006680a0) Stream removed, broadcasting: 5\n" Sep 7 07:42:55.864: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 7 07:42:55.864: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> 
'/tmp/index.html' Sep 7 07:42:55.867: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Sep 7 07:43:05.888: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 7 07:43:05.888: INFO: Waiting for statefulset status.replicas updated to 0 Sep 7 07:43:05.900: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999396s Sep 7 07:43:06.906: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.99622157s Sep 7 07:43:07.911: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.990633139s Sep 7 07:43:08.916: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.986134902s Sep 7 07:43:09.920: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.980671896s Sep 7 07:43:10.926: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.976918373s Sep 7 07:43:11.947: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.970400851s Sep 7 07:43:12.953: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.949182847s Sep 7 07:43:13.965: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.943272157s Sep 7 07:43:14.970: INFO: Verifying statefulset ss doesn't scale past 1 for another 931.438029ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3028 Sep 7 07:43:15.983: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 7 07:43:16.255: INFO: stderr: "I0907 07:43:16.145207 47 log.go:181] (0xc000e293f0) (0xc000914640) Create stream\nI0907 07:43:16.145260 47 log.go:181] (0xc000e293f0) (0xc000914640) Stream added, broadcasting: 1\nI0907 07:43:16.150531 47 log.go:181] (0xc000e293f0) Reply frame received for 1\nI0907 07:43:16.150596 47 log.go:181] (0xc000e293f0) 
(0xc000f06000) Create stream\nI0907 07:43:16.150620 47 log.go:181] (0xc000e293f0) (0xc000f06000) Stream added, broadcasting: 3\nI0907 07:43:16.151862 47 log.go:181] (0xc000e293f0) Reply frame received for 3\nI0907 07:43:16.151915 47 log.go:181] (0xc000e293f0) (0xc000914000) Create stream\nI0907 07:43:16.151930 47 log.go:181] (0xc000e293f0) (0xc000914000) Stream added, broadcasting: 5\nI0907 07:43:16.152973 47 log.go:181] (0xc000e293f0) Reply frame received for 5\nI0907 07:43:16.248532 47 log.go:181] (0xc000e293f0) Data frame received for 3\nI0907 07:43:16.248581 47 log.go:181] (0xc000f06000) (3) Data frame handling\nI0907 07:43:16.248602 47 log.go:181] (0xc000f06000) (3) Data frame sent\nI0907 07:43:16.248618 47 log.go:181] (0xc000e293f0) Data frame received for 3\nI0907 07:43:16.248629 47 log.go:181] (0xc000f06000) (3) Data frame handling\nI0907 07:43:16.248666 47 log.go:181] (0xc000e293f0) Data frame received for 5\nI0907 07:43:16.248693 47 log.go:181] (0xc000914000) (5) Data frame handling\nI0907 07:43:16.248723 47 log.go:181] (0xc000914000) (5) Data frame sent\nI0907 07:43:16.248741 47 log.go:181] (0xc000e293f0) Data frame received for 5\nI0907 07:43:16.248752 47 log.go:181] (0xc000914000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0907 07:43:16.250135 47 log.go:181] (0xc000e293f0) Data frame received for 1\nI0907 07:43:16.250158 47 log.go:181] (0xc000914640) (1) Data frame handling\nI0907 07:43:16.250169 47 log.go:181] (0xc000914640) (1) Data frame sent\nI0907 07:43:16.250180 47 log.go:181] (0xc000e293f0) (0xc000914640) Stream removed, broadcasting: 1\nI0907 07:43:16.250192 47 log.go:181] (0xc000e293f0) Go away received\nI0907 07:43:16.250846 47 log.go:181] (0xc000e293f0) (0xc000914640) Stream removed, broadcasting: 1\nI0907 07:43:16.250878 47 log.go:181] (0xc000e293f0) (0xc000f06000) Stream removed, broadcasting: 3\nI0907 07:43:16.250898 47 log.go:181] (0xc000e293f0) (0xc000914000) Stream removed, broadcasting: 5\n" Sep 7 
07:43:16.255: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 7 07:43:16.255: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 7 07:43:16.259: INFO: Found 1 stateful pods, waiting for 3 Sep 7 07:43:26.265: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Sep 7 07:43:26.265: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Sep 7 07:43:26.265: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Sep 7 07:43:26.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 7 07:43:26.538: INFO: stderr: "I0907 07:43:26.418870 66 log.go:181] (0xc0006b3130) (0xc0005d28c0) Create stream\nI0907 07:43:26.418936 66 log.go:181] (0xc0006b3130) (0xc0005d28c0) Stream added, broadcasting: 1\nI0907 07:43:26.424502 66 log.go:181] (0xc0006b3130) Reply frame received for 1\nI0907 07:43:26.424546 66 log.go:181] (0xc0006b3130) (0xc0005d2000) Create stream\nI0907 07:43:26.424558 66 log.go:181] (0xc0006b3130) (0xc0005d2000) Stream added, broadcasting: 3\nI0907 07:43:26.425650 66 log.go:181] (0xc0006b3130) Reply frame received for 3\nI0907 07:43:26.425741 66 log.go:181] (0xc0006b3130) (0xc00089a140) Create stream\nI0907 07:43:26.425777 66 log.go:181] (0xc0006b3130) (0xc00089a140) Stream added, broadcasting: 5\nI0907 07:43:26.426694 66 log.go:181] (0xc0006b3130) Reply frame received for 5\nI0907 07:43:26.531430 66 log.go:181] (0xc0006b3130) Data frame received for 3\nI0907 07:43:26.531464 66 log.go:181] (0xc0005d2000) (3) Data frame handling\nI0907 07:43:26.531472 66 
log.go:181] (0xc0005d2000) (3) Data frame sent\nI0907 07:43:26.531479 66 log.go:181] (0xc0006b3130) Data frame received for 3\nI0907 07:43:26.531484 66 log.go:181] (0xc0005d2000) (3) Data frame handling\nI0907 07:43:26.531507 66 log.go:181] (0xc0006b3130) Data frame received for 5\nI0907 07:43:26.531513 66 log.go:181] (0xc00089a140) (5) Data frame handling\nI0907 07:43:26.531520 66 log.go:181] (0xc00089a140) (5) Data frame sent\nI0907 07:43:26.531526 66 log.go:181] (0xc0006b3130) Data frame received for 5\nI0907 07:43:26.531532 66 log.go:181] (0xc00089a140) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0907 07:43:26.533198 66 log.go:181] (0xc0006b3130) Data frame received for 1\nI0907 07:43:26.533211 66 log.go:181] (0xc0005d28c0) (1) Data frame handling\nI0907 07:43:26.533231 66 log.go:181] (0xc0005d28c0) (1) Data frame sent\nI0907 07:43:26.533331 66 log.go:181] (0xc0006b3130) (0xc0005d28c0) Stream removed, broadcasting: 1\nI0907 07:43:26.533780 66 log.go:181] (0xc0006b3130) (0xc0005d28c0) Stream removed, broadcasting: 1\nI0907 07:43:26.533808 66 log.go:181] (0xc0006b3130) (0xc0005d2000) Stream removed, broadcasting: 3\nI0907 07:43:26.533820 66 log.go:181] (0xc0006b3130) (0xc00089a140) Stream removed, broadcasting: 5\n" Sep 7 07:43:26.538: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 7 07:43:26.538: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 7 07:43:26.538: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 7 07:43:26.791: INFO: stderr: "I0907 07:43:26.677954 84 log.go:181] (0xc000e11760) (0xc0005a8960) Create stream\nI0907 07:43:26.678001 84 log.go:181] (0xc000e11760) (0xc0005a8960) Stream added, broadcasting: 1\nI0907 
07:43:26.686211 84 log.go:181] (0xc000e11760) Reply frame received for 1\nI0907 07:43:26.686273 84 log.go:181] (0xc000e11760) (0xc0005a8000) Create stream\nI0907 07:43:26.686287 84 log.go:181] (0xc000e11760) (0xc0005a8000) Stream added, broadcasting: 3\nI0907 07:43:26.687450 84 log.go:181] (0xc000e11760) Reply frame received for 3\nI0907 07:43:26.687491 84 log.go:181] (0xc000e11760) (0xc0005a80a0) Create stream\nI0907 07:43:26.687502 84 log.go:181] (0xc000e11760) (0xc0005a80a0) Stream added, broadcasting: 5\nI0907 07:43:26.688467 84 log.go:181] (0xc000e11760) Reply frame received for 5\nI0907 07:43:26.758165 84 log.go:181] (0xc000e11760) Data frame received for 5\nI0907 07:43:26.758193 84 log.go:181] (0xc0005a80a0) (5) Data frame handling\nI0907 07:43:26.758213 84 log.go:181] (0xc0005a80a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0907 07:43:26.783326 84 log.go:181] (0xc000e11760) Data frame received for 3\nI0907 07:43:26.783341 84 log.go:181] (0xc0005a8000) (3) Data frame handling\nI0907 07:43:26.783347 84 log.go:181] (0xc0005a8000) (3) Data frame sent\nI0907 07:43:26.783557 84 log.go:181] (0xc000e11760) Data frame received for 5\nI0907 07:43:26.783569 84 log.go:181] (0xc0005a80a0) (5) Data frame handling\nI0907 07:43:26.783690 84 log.go:181] (0xc000e11760) Data frame received for 3\nI0907 07:43:26.783705 84 log.go:181] (0xc0005a8000) (3) Data frame handling\nI0907 07:43:26.785863 84 log.go:181] (0xc000e11760) Data frame received for 1\nI0907 07:43:26.785901 84 log.go:181] (0xc0005a8960) (1) Data frame handling\nI0907 07:43:26.785925 84 log.go:181] (0xc0005a8960) (1) Data frame sent\nI0907 07:43:26.785940 84 log.go:181] (0xc000e11760) (0xc0005a8960) Stream removed, broadcasting: 1\nI0907 07:43:26.785988 84 log.go:181] (0xc000e11760) Go away received\nI0907 07:43:26.786389 84 log.go:181] (0xc000e11760) (0xc0005a8960) Stream removed, broadcasting: 1\nI0907 07:43:26.786410 84 log.go:181] (0xc000e11760) (0xc0005a8000) Stream removed, 
broadcasting: 3\nI0907 07:43:26.786421 84 log.go:181] (0xc000e11760) (0xc0005a80a0) Stream removed, broadcasting: 5\n" Sep 7 07:43:26.791: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 7 07:43:26.791: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 7 07:43:26.791: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 7 07:43:27.047: INFO: stderr: "I0907 07:43:26.920074 102 log.go:181] (0xc0008c8fd0) (0xc000390c80) Create stream\nI0907 07:43:26.920126 102 log.go:181] (0xc0008c8fd0) (0xc000390c80) Stream added, broadcasting: 1\nI0907 07:43:26.924755 102 log.go:181] (0xc0008c8fd0) Reply frame received for 1\nI0907 07:43:26.924798 102 log.go:181] (0xc0008c8fd0) (0xc000208aa0) Create stream\nI0907 07:43:26.924811 102 log.go:181] (0xc0008c8fd0) (0xc000208aa0) Stream added, broadcasting: 3\nI0907 07:43:26.925839 102 log.go:181] (0xc0008c8fd0) Reply frame received for 3\nI0907 07:43:26.925904 102 log.go:181] (0xc0008c8fd0) (0xc0003903c0) Create stream\nI0907 07:43:26.925935 102 log.go:181] (0xc0008c8fd0) (0xc0003903c0) Stream added, broadcasting: 5\nI0907 07:43:26.926913 102 log.go:181] (0xc0008c8fd0) Reply frame received for 5\nI0907 07:43:26.980410 102 log.go:181] (0xc0008c8fd0) Data frame received for 5\nI0907 07:43:26.980438 102 log.go:181] (0xc0003903c0) (5) Data frame handling\nI0907 07:43:26.980455 102 log.go:181] (0xc0003903c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0907 07:43:27.040119 102 log.go:181] (0xc0008c8fd0) Data frame received for 3\nI0907 07:43:27.040156 102 log.go:181] (0xc000208aa0) (3) Data frame handling\nI0907 07:43:27.040179 102 log.go:181] (0xc000208aa0) (3) Data frame sent\nI0907 07:43:27.041100 102 log.go:181] 
(0xc0008c8fd0) Data frame received for 3\nI0907 07:43:27.041133 102 log.go:181] (0xc000208aa0) (3) Data frame handling\nI0907 07:43:27.041431 102 log.go:181] (0xc0008c8fd0) Data frame received for 5\nI0907 07:43:27.041457 102 log.go:181] (0xc0003903c0) (5) Data frame handling\nI0907 07:43:27.043531 102 log.go:181] (0xc0008c8fd0) Data frame received for 1\nI0907 07:43:27.043562 102 log.go:181] (0xc000390c80) (1) Data frame handling\nI0907 07:43:27.043572 102 log.go:181] (0xc000390c80) (1) Data frame sent\nI0907 07:43:27.043589 102 log.go:181] (0xc0008c8fd0) (0xc000390c80) Stream removed, broadcasting: 1\nI0907 07:43:27.043613 102 log.go:181] (0xc0008c8fd0) Go away received\nI0907 07:43:27.044092 102 log.go:181] (0xc0008c8fd0) (0xc000390c80) Stream removed, broadcasting: 1\nI0907 07:43:27.044119 102 log.go:181] (0xc0008c8fd0) (0xc000208aa0) Stream removed, broadcasting: 3\nI0907 07:43:27.044127 102 log.go:181] (0xc0008c8fd0) (0xc0003903c0) Stream removed, broadcasting: 5\n" Sep 7 07:43:27.047: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 7 07:43:27.047: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 7 07:43:27.047: INFO: Waiting for statefulset status.replicas updated to 0 Sep 7 07:43:27.067: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Sep 7 07:43:37.077: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 7 07:43:37.077: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Sep 7 07:43:37.077: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Sep 7 07:43:37.096: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999686s Sep 7 07:43:38.102: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987770923s Sep 7 07:43:39.108: INFO: Verifying statefulset 
ss doesn't scale past 3 for another 7.981932493s Sep 7 07:43:40.113: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.975941685s Sep 7 07:43:41.119: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.97089432s Sep 7 07:43:42.125: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.965052838s Sep 7 07:43:43.132: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.95923675s Sep 7 07:43:44.138: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.95228747s Sep 7 07:43:45.143: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.945746936s Sep 7 07:43:46.149: INFO: Verifying statefulset ss doesn't scale past 3 for another 941.282227ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-3028 Sep 7 07:43:47.156: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 7 07:43:47.390: INFO: stderr: "I0907 07:43:47.294426 120 log.go:181] (0xc000d9f600) (0xc0006b48c0) Create stream\nI0907 07:43:47.294482 120 log.go:181] (0xc000d9f600) (0xc0006b48c0) Stream added, broadcasting: 1\nI0907 07:43:47.299216 120 log.go:181] (0xc000d9f600) Reply frame received for 1\nI0907 07:43:47.299269 120 log.go:181] (0xc000d9f600) (0xc0007c2320) Create stream\nI0907 07:43:47.299298 120 log.go:181] (0xc000d9f600) (0xc0007c2320) Stream added, broadcasting: 3\nI0907 07:43:47.302420 120 log.go:181] (0xc000d9f600) Reply frame received for 3\nI0907 07:43:47.302451 120 log.go:181] (0xc000d9f600) (0xc0007c2000) Create stream\nI0907 07:43:47.302463 120 log.go:181] (0xc000d9f600) (0xc0007c2000) Stream added, broadcasting: 5\nI0907 07:43:47.303190 120 log.go:181] (0xc000d9f600) Reply frame received for 5\nI0907 07:43:47.385524 120 log.go:181] (0xc000d9f600) Data frame received for 5\nI0907 
07:43:47.385579 120 log.go:181] (0xc0007c2000) (5) Data frame handling\nI0907 07:43:47.385611 120 log.go:181] (0xc0007c2000) (5) Data frame sent\nI0907 07:43:47.385637 120 log.go:181] (0xc000d9f600) Data frame received for 5\nI0907 07:43:47.385650 120 log.go:181] (0xc0007c2000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0907 07:43:47.385667 120 log.go:181] (0xc000d9f600) Data frame received for 3\nI0907 07:43:47.385767 120 log.go:181] (0xc0007c2320) (3) Data frame handling\nI0907 07:43:47.385819 120 log.go:181] (0xc0007c2320) (3) Data frame sent\nI0907 07:43:47.385834 120 log.go:181] (0xc000d9f600) Data frame received for 3\nI0907 07:43:47.385844 120 log.go:181] (0xc0007c2320) (3) Data frame handling\nI0907 07:43:47.386993 120 log.go:181] (0xc000d9f600) Data frame received for 1\nI0907 07:43:47.387019 120 log.go:181] (0xc0006b48c0) (1) Data frame handling\nI0907 07:43:47.387038 120 log.go:181] (0xc0006b48c0) (1) Data frame sent\nI0907 07:43:47.387207 120 log.go:181] (0xc000d9f600) (0xc0006b48c0) Stream removed, broadcasting: 1\nI0907 07:43:47.387356 120 log.go:181] (0xc000d9f600) Go away received\nI0907 07:43:47.387626 120 log.go:181] (0xc000d9f600) (0xc0006b48c0) Stream removed, broadcasting: 1\nI0907 07:43:47.387641 120 log.go:181] (0xc000d9f600) (0xc0007c2320) Stream removed, broadcasting: 3\nI0907 07:43:47.387647 120 log.go:181] (0xc000d9f600) (0xc0007c2000) Stream removed, broadcasting: 5\n" Sep 7 07:43:47.391: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 7 07:43:47.391: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 7 07:43:47.391: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 7 07:43:47.616: INFO: stderr: "I0907 
07:43:47.527216 139 log.go:181] (0xc000f80fd0) (0xc000ea8780) Create stream\nI0907 07:43:47.527278 139 log.go:181] (0xc000f80fd0) (0xc000ea8780) Stream added, broadcasting: 1\nI0907 07:43:47.532856 139 log.go:181] (0xc000f80fd0) Reply frame received for 1\nI0907 07:43:47.532907 139 log.go:181] (0xc000f80fd0) (0xc000cb6000) Create stream\nI0907 07:43:47.532923 139 log.go:181] (0xc000f80fd0) (0xc000cb6000) Stream added, broadcasting: 3\nI0907 07:43:47.533986 139 log.go:181] (0xc000f80fd0) Reply frame received for 3\nI0907 07:43:47.534034 139 log.go:181] (0xc000f80fd0) (0xc000ea8000) Create stream\nI0907 07:43:47.534060 139 log.go:181] (0xc000f80fd0) (0xc000ea8000) Stream added, broadcasting: 5\nI0907 07:43:47.534885 139 log.go:181] (0xc000f80fd0) Reply frame received for 5\nI0907 07:43:47.609958 139 log.go:181] (0xc000f80fd0) Data frame received for 5\nI0907 07:43:47.609994 139 log.go:181] (0xc000ea8000) (5) Data frame handling\nI0907 07:43:47.610013 139 log.go:181] (0xc000ea8000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0907 07:43:47.610346 139 log.go:181] (0xc000f80fd0) Data frame received for 3\nI0907 07:43:47.610374 139 log.go:181] (0xc000cb6000) (3) Data frame handling\nI0907 07:43:47.610393 139 log.go:181] (0xc000cb6000) (3) Data frame sent\nI0907 07:43:47.610533 139 log.go:181] (0xc000f80fd0) Data frame received for 3\nI0907 07:43:47.610554 139 log.go:181] (0xc000cb6000) (3) Data frame handling\nI0907 07:43:47.611043 139 log.go:181] (0xc000f80fd0) Data frame received for 5\nI0907 07:43:47.611065 139 log.go:181] (0xc000ea8000) (5) Data frame handling\nI0907 07:43:47.612524 139 log.go:181] (0xc000f80fd0) Data frame received for 1\nI0907 07:43:47.612556 139 log.go:181] (0xc000ea8780) (1) Data frame handling\nI0907 07:43:47.612574 139 log.go:181] (0xc000ea8780) (1) Data frame sent\nI0907 07:43:47.612597 139 log.go:181] (0xc000f80fd0) (0xc000ea8780) Stream removed, broadcasting: 1\nI0907 07:43:47.612632 139 log.go:181] 
(0xc000f80fd0) Go away received\nI0907 07:43:47.613008 139 log.go:181] (0xc000f80fd0) (0xc000ea8780) Stream removed, broadcasting: 1\nI0907 07:43:47.613024 139 log.go:181] (0xc000f80fd0) (0xc000cb6000) Stream removed, broadcasting: 3\nI0907 07:43:47.613031 139 log.go:181] (0xc000f80fd0) (0xc000ea8000) Stream removed, broadcasting: 5\n" Sep 7 07:43:47.617: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 7 07:43:47.617: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 7 07:43:47.617: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 7 07:43:47.832: INFO: stderr: "I0907 07:43:47.758198 157 log.go:181] (0xc0001e1550) (0xc00056ebe0) Create stream\nI0907 07:43:47.758250 157 log.go:181] (0xc0001e1550) (0xc00056ebe0) Stream added, broadcasting: 1\nI0907 07:43:47.761366 157 log.go:181] (0xc0001e1550) Reply frame received for 1\nI0907 07:43:47.761407 157 log.go:181] (0xc0001e1550) (0xc000b8e000) Create stream\nI0907 07:43:47.761420 157 log.go:181] (0xc0001e1550) (0xc000b8e000) Stream added, broadcasting: 3\nI0907 07:43:47.762386 157 log.go:181] (0xc0001e1550) Reply frame received for 3\nI0907 07:43:47.762426 157 log.go:181] (0xc0001e1550) (0xc000d86280) Create stream\nI0907 07:43:47.762441 157 log.go:181] (0xc0001e1550) (0xc000d86280) Stream added, broadcasting: 5\nI0907 07:43:47.763457 157 log.go:181] (0xc0001e1550) Reply frame received for 5\nI0907 07:43:47.825146 157 log.go:181] (0xc0001e1550) Data frame received for 3\nI0907 07:43:47.825186 157 log.go:181] (0xc000b8e000) (3) Data frame handling\nI0907 07:43:47.825215 157 log.go:181] (0xc000b8e000) (3) Data frame sent\nI0907 07:43:47.825230 157 log.go:181] (0xc0001e1550) Data frame received for 3\nI0907 07:43:47.825241 157 
log.go:181] (0xc000b8e000) (3) Data frame handling\nI0907 07:43:47.825741 157 log.go:181] (0xc0001e1550) Data frame received for 5\nI0907 07:43:47.825776 157 log.go:181] (0xc000d86280) (5) Data frame handling\nI0907 07:43:47.825818 157 log.go:181] (0xc000d86280) (5) Data frame sent\nI0907 07:43:47.825838 157 log.go:181] (0xc0001e1550) Data frame received for 5\nI0907 07:43:47.825851 157 log.go:181] (0xc000d86280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0907 07:43:47.827192 157 log.go:181] (0xc0001e1550) Data frame received for 1\nI0907 07:43:47.827219 157 log.go:181] (0xc00056ebe0) (1) Data frame handling\nI0907 07:43:47.827241 157 log.go:181] (0xc00056ebe0) (1) Data frame sent\nI0907 07:43:47.827266 157 log.go:181] (0xc0001e1550) (0xc00056ebe0) Stream removed, broadcasting: 1\nI0907 07:43:47.827300 157 log.go:181] (0xc0001e1550) Go away received\nI0907 07:43:47.827699 157 log.go:181] (0xc0001e1550) (0xc00056ebe0) Stream removed, broadcasting: 1\nI0907 07:43:47.827733 157 log.go:181] (0xc0001e1550) (0xc000b8e000) Stream removed, broadcasting: 3\nI0907 07:43:47.827748 157 log.go:181] (0xc0001e1550) (0xc000d86280) Stream removed, broadcasting: 5\n" Sep 7 07:43:47.832: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 7 07:43:47.832: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 7 07:43:47.832: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 7 07:44:07.852: INFO: Deleting all statefulset in ns statefulset-3028 Sep 7 07:44:07.855: INFO: Scaling statefulset ss to 0 Sep 7 07:44:07.867: INFO: Waiting for statefulset 
status.replicas updated to 0 Sep 7 07:44:07.869: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:44:07.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3028" for this suite. • [SLOW TEST:85.451 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":303,"completed":18,"skipped":282,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:44:07.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 7 07:44:07.962: INFO: Waiting up to 5m0s for pod "downwardapi-volume-42f7e2d7-48bf-459e-b01a-be844721896a" in namespace "projected-6376" to be "Succeeded or Failed" Sep 7 07:44:07.969: INFO: Pod "downwardapi-volume-42f7e2d7-48bf-459e-b01a-be844721896a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.752204ms Sep 7 07:44:09.975: INFO: Pod "downwardapi-volume-42f7e2d7-48bf-459e-b01a-be844721896a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012679165s Sep 7 07:44:11.979: INFO: Pod "downwardapi-volume-42f7e2d7-48bf-459e-b01a-be844721896a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017027651s STEP: Saw pod success Sep 7 07:44:11.979: INFO: Pod "downwardapi-volume-42f7e2d7-48bf-459e-b01a-be844721896a" satisfied condition "Succeeded or Failed" Sep 7 07:44:11.982: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-42f7e2d7-48bf-459e-b01a-be844721896a container client-container: STEP: delete the pod Sep 7 07:44:12.063: INFO: Waiting for pod downwardapi-volume-42f7e2d7-48bf-459e-b01a-be844721896a to disappear Sep 7 07:44:12.072: INFO: Pod downwardapi-volume-42f7e2d7-48bf-459e-b01a-be844721896a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:44:12.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6376" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":19,"skipped":290,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:44:12.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:45:12.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3217" for this suite. • [SLOW TEST:60.115 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":303,"completed":20,"skipped":296,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:45:12.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Sep 7 07:45:12.287: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:45:29.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5477" for this suite. 
• [SLOW TEST:17.358 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":303,"completed":21,"skipped":307,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:45:29.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Sep 7 07:45:29.651: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Sep 7 07:45:29.666: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Sep 7 07:45:29.666: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Sep 7 07:45:29.679: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Sep 7 07:45:29.679: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Sep 7 07:45:29.734: INFO: Verifying requests: 
expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Sep 7 07:45:29.734: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Sep 7 07:45:37.251: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:45:37.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-1515" for this suite. • [SLOW TEST:7.769 seconds] [sig-scheduling] LimitRange /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":303,"completed":22,"skipped":322,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:45:37.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Sep 7 07:45:37.807: INFO: Waiting up to 5m0s for pod "pod-7d51f1f3-a28b-4537-8772-cf02e8ec6108" in namespace "emptydir-7923" to be "Succeeded or Failed" Sep 7 07:45:37.836: INFO: Pod "pod-7d51f1f3-a28b-4537-8772-cf02e8ec6108": Phase="Pending", Reason="", readiness=false. Elapsed: 29.011126ms Sep 7 07:45:39.840: INFO: Pod "pod-7d51f1f3-a28b-4537-8772-cf02e8ec6108": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032718001s Sep 7 07:45:41.844: INFO: Pod "pod-7d51f1f3-a28b-4537-8772-cf02e8ec6108": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.037095501s STEP: Saw pod success Sep 7 07:45:41.844: INFO: Pod "pod-7d51f1f3-a28b-4537-8772-cf02e8ec6108" satisfied condition "Succeeded or Failed" Sep 7 07:45:41.847: INFO: Trying to get logs from node latest-worker2 pod pod-7d51f1f3-a28b-4537-8772-cf02e8ec6108 container test-container: STEP: delete the pod Sep 7 07:45:41.947: INFO: Waiting for pod pod-7d51f1f3-a28b-4537-8772-cf02e8ec6108 to disappear Sep 7 07:45:41.985: INFO: Pod pod-7d51f1f3-a28b-4537-8772-cf02e8ec6108 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:45:41.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7923" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":23,"skipped":326,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:45:41.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 7 07:45:43.286: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 7 07:45:45.739: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061543, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061543, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061543, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061543, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 7 07:45:47.743: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061543, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061543, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061543, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061543, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 7 07:45:50.813: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:46:00.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7059" for this suite. STEP: Destroying namespace "webhook-7059-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.087 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":303,"completed":24,"skipped":331,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:46:01.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check is all data is printed [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 
7 07:46:01.141: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config version' Sep 7 07:46:01.319: INFO: stderr: "" Sep 7 07:46:01.319: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.1-rc.0\", GitCommit:\"945f4d7267dedfa22337d3705c510f0e3612ace6\", GitTreeState:\"clean\", BuildDate:\"2020-08-26T14:49:55Z\", GoVersion:\"go1.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19\", GitVersion:\"v1.19.0\", GitCommit:\"e19964183377d0ec2052d1f1fa930c4d7575bd50\", GitTreeState:\"clean\", BuildDate:\"2020-08-28T22:11:08Z\", GoVersion:\"go1.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:46:01.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4403" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":303,"completed":25,"skipped":332,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:46:01.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Sep 7 07:46:06.067: INFO: Successfully updated pod "annotationupdated8debd31-ac88-44f5-8ff7-c4ea6784e423" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:46:10.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2401" for this suite. 
• [SLOW TEST:8.829 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":26,"skipped":368,"failed":0} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:46:10.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 07:46:10.328: INFO: Create a RollingUpdate DaemonSet Sep 7 07:46:10.334: INFO: Check that daemon pods launch on every node of the cluster Sep 7 07:46:10.350: INFO: DaemonSet pods can't tolerate node latest-control-plane with 
taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 07:46:10.375: INFO: Number of nodes with available pods: 0 Sep 7 07:46:10.375: INFO: Node latest-worker is running more than one daemon pod Sep 7 07:46:11.381: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 07:46:11.384: INFO: Number of nodes with available pods: 0 Sep 7 07:46:11.384: INFO: Node latest-worker is running more than one daemon pod Sep 7 07:46:12.511: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 07:46:12.515: INFO: Number of nodes with available pods: 0 Sep 7 07:46:12.515: INFO: Node latest-worker is running more than one daemon pod Sep 7 07:46:13.407: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 07:46:13.411: INFO: Number of nodes with available pods: 0 Sep 7 07:46:13.411: INFO: Node latest-worker is running more than one daemon pod Sep 7 07:46:14.382: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 07:46:14.386: INFO: Number of nodes with available pods: 1 Sep 7 07:46:14.386: INFO: Node latest-worker2 is running more than one daemon pod Sep 7 07:46:15.396: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 07:46:15.400: INFO: Number of nodes with available pods: 2 Sep 7 07:46:15.400: INFO: Number of running nodes: 2, number of available pods: 2 Sep 7 07:46:15.400: INFO: Update the 
DaemonSet to trigger a rollout Sep 7 07:46:15.407: INFO: Updating DaemonSet daemon-set Sep 7 07:46:22.460: INFO: Roll back the DaemonSet before rollout is complete Sep 7 07:46:22.468: INFO: Updating DaemonSet daemon-set Sep 7 07:46:22.468: INFO: Make sure DaemonSet rollback is complete Sep 7 07:46:22.478: INFO: Wrong image for pod: daemon-set-sgfmf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Sep 7 07:46:22.479: INFO: Pod daemon-set-sgfmf is not available Sep 7 07:46:22.485: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 07:46:23.490: INFO: Wrong image for pod: daemon-set-sgfmf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Sep 7 07:46:23.490: INFO: Pod daemon-set-sgfmf is not available Sep 7 07:46:23.494: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 07:46:24.490: INFO: Wrong image for pod: daemon-set-sgfmf. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Sep 7 07:46:24.490: INFO: Pod daemon-set-sgfmf is not available Sep 7 07:46:24.494: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 07:46:25.490: INFO: Pod daemon-set-q2ms4 is not available Sep 7 07:46:25.495: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6982, will wait for the garbage collector to delete the pods Sep 7 07:46:25.559: INFO: Deleting DaemonSet.extensions daemon-set took: 5.926544ms Sep 7 07:46:25.959: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.253066ms Sep 7 07:46:28.863: INFO: Number of nodes with available pods: 0 Sep 7 07:46:28.863: INFO: Number of running nodes: 0, number of available pods: 0 Sep 7 07:46:28.867: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6982/daemonsets","resourceVersion":"272790"},"items":null} Sep 7 07:46:28.869: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6982/pods","resourceVersion":"272790"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:46:28.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6982" for this suite. 
• [SLOW TEST:18.727 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":303,"completed":27,"skipped":374,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:46:28.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 7 07:46:29.001: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f2e97d6b-0d2a-4568-bfa4-e780d6b8df9b" in namespace "projected-1749" to be "Succeeded or Failed" Sep 7 07:46:29.003: INFO: Pod 
"downwardapi-volume-f2e97d6b-0d2a-4568-bfa4-e780d6b8df9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.343517ms Sep 7 07:46:31.008: INFO: Pod "downwardapi-volume-f2e97d6b-0d2a-4568-bfa4-e780d6b8df9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007619324s Sep 7 07:46:33.013: INFO: Pod "downwardapi-volume-f2e97d6b-0d2a-4568-bfa4-e780d6b8df9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012667734s STEP: Saw pod success Sep 7 07:46:33.014: INFO: Pod "downwardapi-volume-f2e97d6b-0d2a-4568-bfa4-e780d6b8df9b" satisfied condition "Succeeded or Failed" Sep 7 07:46:33.017: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-f2e97d6b-0d2a-4568-bfa4-e780d6b8df9b container client-container: STEP: delete the pod Sep 7 07:46:33.059: INFO: Waiting for pod downwardapi-volume-f2e97d6b-0d2a-4568-bfa4-e780d6b8df9b to disappear Sep 7 07:46:33.071: INFO: Pod downwardapi-volume-f2e97d6b-0d2a-4568-bfa4-e780d6b8df9b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:46:33.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1749" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":303,"completed":28,"skipped":377,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:46:33.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Sep 7 07:46:41.392: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 7 07:46:41.430: INFO: Pod pod-with-poststart-http-hook still exists Sep 7 07:46:43.430: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 7 07:46:43.434: INFO: Pod pod-with-poststart-http-hook still exists Sep 7 07:46:45.430: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 7 07:46:45.442: INFO: Pod pod-with-poststart-http-hook still exists Sep 7 07:46:47.430: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 7 07:46:47.434: INFO: Pod pod-with-poststart-http-hook still exists Sep 7 07:46:49.430: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 7 07:46:49.435: INFO: Pod pod-with-poststart-http-hook still exists Sep 7 07:46:51.430: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 7 07:46:51.435: INFO: Pod pod-with-poststart-http-hook still exists Sep 7 07:46:53.430: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 7 07:46:53.435: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:46:53.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9806" for this suite. 
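The lifecycle-hook case first starts a separate pod to handle the HTTPGet request (the "create the container to handle the HTTPGet hook request" step), then creates `pod-with-poststart-http-hook` pointing at it. A sketch of the hooked pod — the image, hook path, port, and handler address are assumptions for illustration — could be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.2      # assumed; any long-running image works
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart  # hypothetical path served by the handler pod
          port: 8080                 # hypothetical handler port
          host: 10.244.0.10          # hypothetical pod IP of the handler
```

The kubelet does not mark the container Running until the postStart handler returns, which is what lets the suite assert afterwards that the handler pod observed the request.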
• [SLOW TEST:20.364 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":303,"completed":29,"skipped":395,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:46:53.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments Sep 7 07:46:53.519: INFO: Waiting up to 5m0s for pod 
"client-containers-f4e57701-5681-4da5-9088-1a481a983d2a" in namespace "containers-1853" to be "Succeeded or Failed" Sep 7 07:46:53.523: INFO: Pod "client-containers-f4e57701-5681-4da5-9088-1a481a983d2a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.771755ms Sep 7 07:46:55.548: INFO: Pod "client-containers-f4e57701-5681-4da5-9088-1a481a983d2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028647542s Sep 7 07:46:57.553: INFO: Pod "client-containers-f4e57701-5681-4da5-9088-1a481a983d2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033567757s STEP: Saw pod success Sep 7 07:46:57.553: INFO: Pod "client-containers-f4e57701-5681-4da5-9088-1a481a983d2a" satisfied condition "Succeeded or Failed" Sep 7 07:46:57.556: INFO: Trying to get logs from node latest-worker pod client-containers-f4e57701-5681-4da5-9088-1a481a983d2a container test-container: STEP: delete the pod Sep 7 07:46:57.611: INFO: Waiting for pod client-containers-f4e57701-5681-4da5-9088-1a481a983d2a to disappear Sep 7 07:46:57.622: INFO: Pod client-containers-f4e57701-5681-4da5-9088-1a481a983d2a no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:46:57.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1853" for this suite. 
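The Docker Containers case verifies that `args` in a container spec overrides the image's default arguments (Docker `CMD`), while `command` would override its `ENTRYPOINT`. An illustrative pod of that shape, with assumed names and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                   # assumed stand-in for the suite's test image
    # args replaces the image's default CMD; the image's ENTRYPOINT (if any) is kept
    args: ["echo", "override", "arguments"]
```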
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":303,"completed":30,"skipped":405,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:46:57.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl run pod /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 [It] should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Sep 7 07:46:57.674: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3815' Sep 7 07:46:57.784: INFO: stderr: "" Sep 7 07:46:57.784: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run 
pod /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1550 Sep 7 07:46:57.803: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3815' Sep 7 07:47:10.893: INFO: stderr: "" Sep 7 07:47:10.893: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:47:10.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3815" for this suite. • [SLOW TEST:13.297 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541 should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":303,"completed":31,"skipped":421,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:47:10.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 07:47:10.992: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Sep 7 07:47:13.984: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-998 create -f -' Sep 7 07:47:18.317: INFO: stderr: "" Sep 7 07:47:18.317: INFO: stdout: "e2e-test-crd-publish-openapi-1637-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Sep 7 07:47:18.317: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-998 delete e2e-test-crd-publish-openapi-1637-crds test-foo' Sep 7 07:47:18.444: INFO: stderr: "" Sep 7 07:47:18.444: INFO: stdout: "e2e-test-crd-publish-openapi-1637-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Sep 7 07:47:18.444: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-998 apply -f -' Sep 7 07:47:18.758: INFO: stderr: "" Sep 7 07:47:18.758: INFO: stdout: "e2e-test-crd-publish-openapi-1637-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Sep 7 07:47:18.758: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-998 delete e2e-test-crd-publish-openapi-1637-crds test-foo' Sep 7 07:47:18.892: INFO: stderr: "" Sep 7 07:47:18.892: INFO: stdout: "e2e-test-crd-publish-openapi-1637-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Sep 7 07:47:18.892: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-998 create -f -' Sep 7 07:47:19.188: INFO: rc: 1 Sep 7 07:47:19.188: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-998 apply -f -' Sep 7 07:47:19.466: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Sep 7 07:47:19.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-998 create -f -' Sep 7 07:47:19.744: INFO: rc: 1 Sep 7 07:47:19.744: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-998 apply -f -' Sep 7 07:47:20.010: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Sep 7 07:47:20.010: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1637-crds' Sep 7 07:47:20.294: INFO: stderr: "" Sep 7 07:47:20.294: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1637-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Sep 7 07:47:20.295: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1637-crds.metadata' Sep 7 07:47:20.572: INFO: stderr: "" Sep 7 07:47:20.572: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1637-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. 
This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. 
In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. 
The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. 
An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Sep 7 07:47:20.573: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1637-crds.spec' Sep 7 07:47:20.878: INFO: stderr: "" Sep 7 07:47:20.878: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1637-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Sep 7 07:47:20.878: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1637-crds.spec.bars' Sep 7 07:47:21.164: INFO: stderr: "" Sep 7 07:47:21.164: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1637-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Sep 7 07:47:21.164: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1637-crds.spec.bars2' Sep 7 07:47:21.435: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:47:24.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-998" for this suite. 
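The `kubectl explain` output above implies the shape of the CRD's validation schema: `spec.bars` is a list of objects with a required `name` plus `age` and a `bazs` string list. A hand-written CRD with an equivalent `openAPIV3Schema` — the group, resource names, and field types here are assumptions for illustration (the suite generates its own randomized CRD) — might be:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.crd-publish-openapi-test-foo.example.com   # illustrative
spec:
  group: crd-publish-openapi-test-foo.example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            description: Specification of Foo
            type: object
            properties:
              bars:
                description: List of Bars and their specs.
                type: array
                items:
                  type: object
                  required: ["name"]      # matches the -required- marker in kubectl explain
                  properties:
                    name:
                      description: Name of Bar.
                      type: string
                    age:
                      description: Age of Bar.
                      type: string        # assumed type
                    bazs:
                      description: List of Bazs.
                      type: array
                      items:
                        type: string
```

With such a schema published, the apiserver's OpenAPI document drives both the client-side validation (the `rc: 1` rejections above) and the `kubectl explain` output.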
• [SLOW TEST:13.535 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":303,"completed":32,"skipped":445,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:47:24.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Sep 7 07:47:24.574: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a 
watch-7181 /api/v1/namespaces/watch-7181/configmaps/e2e-watch-test-configmap-a e95af577-627b-4ed0-9e65-2618a5733df2 273115 0 2020-09-07 07:47:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-07 07:47:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Sep 7 07:47:24.574: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7181 /api/v1/namespaces/watch-7181/configmaps/e2e-watch-test-configmap-a e95af577-627b-4ed0-9e65-2618a5733df2 273115 0 2020-09-07 07:47:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-07 07:47:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Sep 7 07:47:34.585: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7181 /api/v1/namespaces/watch-7181/configmaps/e2e-watch-test-configmap-a e95af577-627b-4ed0-9e65-2618a5733df2 273147 0 2020-09-07 07:47:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-07 07:47:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 7 07:47:34.585: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7181 /api/v1/namespaces/watch-7181/configmaps/e2e-watch-test-configmap-a e95af577-627b-4ed0-9e65-2618a5733df2 273147 0 2020-09-07 07:47:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-07 07:47:34 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Sep 7 07:47:44.597: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7181 /api/v1/namespaces/watch-7181/configmaps/e2e-watch-test-configmap-a e95af577-627b-4ed0-9e65-2618a5733df2 273177 0 2020-09-07 07:47:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-07 07:47:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 7 07:47:44.597: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7181 /api/v1/namespaces/watch-7181/configmaps/e2e-watch-test-configmap-a e95af577-627b-4ed0-9e65-2618a5733df2 273177 0 2020-09-07 07:47:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-07 07:47:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Sep 7 07:47:54.606: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7181 /api/v1/namespaces/watch-7181/configmaps/e2e-watch-test-configmap-a e95af577-627b-4ed0-9e65-2618a5733df2 273207 0 2020-09-07 07:47:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-07 07:47:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 7 07:47:54.607: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7181 /api/v1/namespaces/watch-7181/configmaps/e2e-watch-test-configmap-a e95af577-627b-4ed0-9e65-2618a5733df2 273207 0 2020-09-07 07:47:24 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-07 07:47:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Sep 7 07:48:04.616: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7181 /api/v1/namespaces/watch-7181/configmaps/e2e-watch-test-configmap-b d72bc2bc-46ae-4057-ba18-ffa2a157736f 273237 0 2020-09-07 07:48:04 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-09-07 07:48:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Sep 7 07:48:04.617: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7181 /api/v1/namespaces/watch-7181/configmaps/e2e-watch-test-configmap-b d72bc2bc-46ae-4057-ba18-ffa2a157736f 273237 0 2020-09-07 07:48:04 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-09-07 07:48:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Sep 7 07:48:14.623: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7181 /api/v1/namespaces/watch-7181/configmaps/e2e-watch-test-configmap-b d72bc2bc-46ae-4057-ba18-ffa2a157736f 273267 
0 2020-09-07 07:48:04 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-09-07 07:48:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Sep 7 07:48:14.624: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7181 /api/v1/namespaces/watch-7181/configmaps/e2e-watch-test-configmap-b d72bc2bc-46ae-4057-ba18-ffa2a157736f 273267 0 2020-09-07 07:48:04 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-09-07 07:48:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:48:24.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7181" for this suite. 
• [SLOW TEST:60.172 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":303,"completed":33,"skipped":449,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:48:24.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 07:48:24.844: INFO: The status of Pod test-webserver-8db27fdc-9aab-4bc9-a1fe-4226b1cf9723 is Pending, 
waiting for it to be Running (with Ready = true) Sep 7 07:48:26.850: INFO: The status of Pod test-webserver-8db27fdc-9aab-4bc9-a1fe-4226b1cf9723 is Pending, waiting for it to be Running (with Ready = true) Sep 7 07:48:28.849: INFO: The status of Pod test-webserver-8db27fdc-9aab-4bc9-a1fe-4226b1cf9723 is Running (Ready = false) Sep 7 07:48:30.849: INFO: The status of Pod test-webserver-8db27fdc-9aab-4bc9-a1fe-4226b1cf9723 is Running (Ready = false) Sep 7 07:48:32.849: INFO: The status of Pod test-webserver-8db27fdc-9aab-4bc9-a1fe-4226b1cf9723 is Running (Ready = false) Sep 7 07:48:34.847: INFO: The status of Pod test-webserver-8db27fdc-9aab-4bc9-a1fe-4226b1cf9723 is Running (Ready = false) Sep 7 07:48:36.849: INFO: The status of Pod test-webserver-8db27fdc-9aab-4bc9-a1fe-4226b1cf9723 is Running (Ready = false) Sep 7 07:48:38.849: INFO: The status of Pod test-webserver-8db27fdc-9aab-4bc9-a1fe-4226b1cf9723 is Running (Ready = false) Sep 7 07:48:40.849: INFO: The status of Pod test-webserver-8db27fdc-9aab-4bc9-a1fe-4226b1cf9723 is Running (Ready = false) Sep 7 07:48:42.849: INFO: The status of Pod test-webserver-8db27fdc-9aab-4bc9-a1fe-4226b1cf9723 is Running (Ready = false) Sep 7 07:48:44.893: INFO: The status of Pod test-webserver-8db27fdc-9aab-4bc9-a1fe-4226b1cf9723 is Running (Ready = false) Sep 7 07:48:46.849: INFO: The status of Pod test-webserver-8db27fdc-9aab-4bc9-a1fe-4226b1cf9723 is Running (Ready = true) Sep 7 07:48:46.852: INFO: Container started at 2020-09-07 07:48:27 +0000 UTC, pod became ready at 2020-09-07 07:48:45 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:48:46.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3103" for this suite. 
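The timeline above (container started at 07:48:27, pod Ready at 07:48:45) is a readiness probe's initial delay at work. A minimal sketch of a pod with a delayed readiness probe; the image, port, and the 20-second delay are illustrative assumptions, not values taken from the test's spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo                # illustrative name
spec:
  containers:
  - name: webserver
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # any HTTP-serving image works
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20         # pod stays Ready=false at least this long
      periodSeconds: 5
```

Until the delay elapses and the first probe succeeds, the pod reports Running (Ready = false), exactly the sequence logged above; and since no liveness probe is set, the kubelet never restarts the container.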
• [SLOW TEST:22.226 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":303,"completed":34,"skipped":462,"failed":0} SSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:48:46.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should support rollover [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 07:48:46.969: INFO: Pod name rollover-pod: Found 0 pods out of 1 Sep 7 07:48:51.982: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Sep 7 07:48:51.983: INFO: Waiting for pods owned by 
replica set "test-rollover-controller" to become ready Sep 7 07:48:53.989: INFO: Creating deployment "test-rollover-deployment" Sep 7 07:48:54.003: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Sep 7 07:48:56.014: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Sep 7 07:48:56.020: INFO: Ensure that both replica sets have 1 created replica Sep 7 07:48:56.026: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Sep 7 07:48:56.032: INFO: Updating deployment test-rollover-deployment Sep 7 07:48:56.032: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Sep 7 07:48:58.052: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Sep 7 07:48:58.057: INFO: Make sure deployment "test-rollover-deployment" is complete Sep 7 07:48:58.063: INFO: all replica sets need to contain the pod-template-hash label Sep 7 07:48:58.064: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061734, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061734, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061736, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061734, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 7 07:49:00.071: INFO: all replica sets need to contain 
the pod-template-hash label Sep 7 07:49:00.071: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061734, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061734, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061739, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061734, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 7 07:49:02.070: INFO: all replica sets need to contain the pod-template-hash label Sep 7 07:49:02.071: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061734, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061734, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061739, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061734, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Sep 7 07:49:04.071: INFO: all replica sets need to contain the pod-template-hash label Sep 7 07:49:04.071: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061734, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061734, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061739, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061734, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 7 07:49:06.072: INFO: all replica sets need to contain the pod-template-hash label Sep 7 07:49:06.073: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061734, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061734, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061739, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061734, loc:(*time.Location)(0x7702840)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 7 07:49:08.073: INFO: all replica sets need to contain the pod-template-hash label Sep 7 07:49:08.073: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061734, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061734, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061739, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061734, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 7 07:49:10.124: INFO: Sep 7 07:49:10.124: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061734, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061734, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061750, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061734, 
loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 7 07:49:12.072: INFO: Sep 7 07:49:12.072: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 7 07:49:12.081: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-750 /apis/apps/v1/namespaces/deployment-750/deployments/test-rollover-deployment 9426783d-7714-4650-8ca9-ae24a5bacb08 273542 2 2020-09-07 07:48:53 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-09-07 07:48:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-07 07:49:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00495d9f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-09-07 07:48:54 +0000 
UTC,LastTransitionTime:2020-09-07 07:48:54 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-5797c7764" has successfully progressed.,LastUpdateTime:2020-09-07 07:49:10 +0000 UTC,LastTransitionTime:2020-09-07 07:48:54 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Sep 7 07:49:12.084: INFO: New ReplicaSet "test-rollover-deployment-5797c7764" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-5797c7764 deployment-750 /apis/apps/v1/namespaces/deployment-750/replicasets/test-rollover-deployment-5797c7764 75d468dd-eb94-48b1-b970-c6a03387b3d2 273531 2 2020-09-07 07:48:56 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 9426783d-7714-4650-8ca9-ae24a5bacb08 0xc0048efcc0 0xc0048efcc1}] [] [{kube-controller-manager Update apps/v1 2020-09-07 07:49:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9426783d-7714-4650-8ca9-ae24a5bacb08\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5797c7764,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0048efd48 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Sep 7 07:49:12.084: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Sep 7 07:49:12.085: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-750 /apis/apps/v1/namespaces/deployment-750/replicasets/test-rollover-controller 7041ccbf-a2ba-42d6-9b57-c58899760190 273541 2 2020-09-07 07:48:46 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 9426783d-7714-4650-8ca9-ae24a5bacb08 0xc0048efb37 0xc0048efb38}] [] [{e2e.test Update apps/v1 2020-09-07 07:48:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-07 07:49:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9426783d-7714-4650-8ca9-ae24a5bacb08\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0048efc08 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 7 07:49:12.085: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-750 /apis/apps/v1/namespaces/deployment-750/replicasets/test-rollover-deployment-78bc8b888c 678bb379-862f-4e0d-a896-ca5d83dbff8b 273480 2 2020-09-07 07:48:54 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 9426783d-7714-4650-8ca9-ae24a5bacb08 0xc0048efdc7 0xc0048efdc8}] [] [{kube-controller-manager Update apps/v1 2020-09-07 07:48:56 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9426783d-7714-4650-8ca9-ae24a5bacb08\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0048efe68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] 
nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 7 07:49:12.089: INFO: Pod "test-rollover-deployment-5797c7764-hfpjg" is available: &Pod{ObjectMeta:{test-rollover-deployment-5797c7764-hfpjg test-rollover-deployment-5797c7764- deployment-750 /api/v1/namespaces/deployment-750/pods/test-rollover-deployment-5797c7764-hfpjg 4c4dc426-ed33-4cea-86e0-f0a1d6c73ede 273498 0 2020-09-07 07:48:56 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [{apps/v1 ReplicaSet test-rollover-deployment-5797c7764 75d468dd-eb94-48b1-b970-c6a03387b3d2 0xc004a4c500 0xc004a4c501}] [] [{kube-controller-manager Update v1 2020-09-07 07:48:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"75d468dd-eb94-48b1-b970-c6a03387b3d2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-07 07:48:59 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.12\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mn6hm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mn6hm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mn6hm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolic
y:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 07:48:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 07:48:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 07:48:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 07:48:56 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.2.12,StartTime:2020-09-07 07:48:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-07 07:48:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://e7dd12d05c066a30cebb79c440766a7ace1efaee31d101631156684427947b6f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.12,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:49:12.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-750" for this suite. 
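The rollover the test drives can be expressed as a plain Deployment. A sketch reconstructed from the spec dumped above (replicas 1, maxSurge 1, maxUnavailable 0, minReadySeconds 10, the agnhost image); the Deployment name and labels match the test's, while namespace and generated metadata are omitted:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10                 # new pods must stay Ready 10s before counting as available
  selector:
    matchLabels:
      name: rollover-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                     # at most one extra pod during the rollover
      maxUnavailable: 0               # never drop below the desired replica count
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
```

Updating `spec.template` (for example, the image) then triggers the two-ReplicaSet dance logged above: the new ReplicaSet scales to 1, waits out `minReadySeconds` before its pod counts as available, and only then is the old ReplicaSet scaled to 0.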
• [SLOW TEST:25.236 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":303,"completed":35,"skipped":465,"failed":0} SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:49:12.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Sep 7 07:49:12.186: INFO: PodSpec: initContainers in spec.initContainers Sep 7 07:50:00.098: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:v1.ObjectMeta{Name:"pod-init-54a0b687-b231-4d49-bdf6-8df22f10fe18", GenerateName:"", Namespace:"init-container-6056", SelfLink:"/api/v1/namespaces/init-container-6056/pods/pod-init-54a0b687-b231-4d49-bdf6-8df22f10fe18", UID:"12a90234-2126-4e14-8f04-5a43dd181d69", ResourceVersion:"273758", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63735061752, loc:(*time.Location)(0x7702840)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"186944179"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004616220), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004616240)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004616260), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004616280)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-pxn4q", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc006988100), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pxn4q", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pxn4q", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), 
LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pxn4q", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004b162c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc00085e230), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004b16350)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004b16370)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004b16378), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc004b1637c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc003c9c060), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061752, loc:(*time.Location)(0x7702840)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061752, loc:(*time.Location)(0x7702840)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061752, loc:(*time.Location)(0x7702840)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, 
v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735061752, loc:(*time.Location)(0x7702840)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.14", PodIP:"10.244.1.251", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.251"}}, StartTime:(*v1.Time)(0xc0046162a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00085e380)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00085e3f0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://7e22ac7e41f15dc1f35c835aa31dfd11f849913cbb66a9bcec969b7694f3e6be", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0046162e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0046162c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc004b163ff)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:50:00.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6056" for this suite. • [SLOW TEST:48.097 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":303,"completed":36,"skipped":472,"failed":0} SSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:50:00.197: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 07:50:00.327: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-9a82b68a-bb80-4f1a-aa03-d754c895b345" in namespace "security-context-test-8849" to be "Succeeded or Failed" Sep 7 07:50:00.373: INFO: Pod "busybox-readonly-false-9a82b68a-bb80-4f1a-aa03-d754c895b345": Phase="Pending", Reason="", readiness=false. Elapsed: 45.36272ms Sep 7 07:50:02.377: INFO: Pod "busybox-readonly-false-9a82b68a-bb80-4f1a-aa03-d754c895b345": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049436293s Sep 7 07:50:04.381: INFO: Pod "busybox-readonly-false-9a82b68a-bb80-4f1a-aa03-d754c895b345": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053801613s Sep 7 07:50:04.381: INFO: Pod "busybox-readonly-false-9a82b68a-bb80-4f1a-aa03-d754c895b345" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:50:04.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8849" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":303,"completed":37,"skipped":478,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:50:04.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Sep 7 07:50:04.610: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:50:12.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4497" for this suite. 
• [SLOW TEST:7.760 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":303,"completed":38,"skipped":501,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:50:12.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-5463475a-c2e5-48e6-9e99-1421ff1fde62 STEP: Creating a pod to test consume secrets Sep 7 07:50:12.260: INFO: Waiting up to 5m0s for pod "pod-secrets-6add2d5d-6d78-4355-8f15-774c0104cf17" in namespace "secrets-9796" to be "Succeeded or Failed" Sep 7 07:50:12.270: INFO: Pod 
"pod-secrets-6add2d5d-6d78-4355-8f15-774c0104cf17": Phase="Pending", Reason="", readiness=false. Elapsed: 9.698081ms Sep 7 07:50:14.295: INFO: Pod "pod-secrets-6add2d5d-6d78-4355-8f15-774c0104cf17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035433906s Sep 7 07:50:16.300: INFO: Pod "pod-secrets-6add2d5d-6d78-4355-8f15-774c0104cf17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040477086s STEP: Saw pod success Sep 7 07:50:16.300: INFO: Pod "pod-secrets-6add2d5d-6d78-4355-8f15-774c0104cf17" satisfied condition "Succeeded or Failed" Sep 7 07:50:16.304: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-6add2d5d-6d78-4355-8f15-774c0104cf17 container secret-volume-test: STEP: delete the pod Sep 7 07:50:16.601: INFO: Waiting for pod pod-secrets-6add2d5d-6d78-4355-8f15-774c0104cf17 to disappear Sep 7 07:50:16.671: INFO: Pod pod-secrets-6add2d5d-6d78-4355-8f15-774c0104cf17 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:50:16.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9796" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":39,"skipped":542,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:50:16.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Sep 7 07:50:16.776: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7506' Sep 7 07:50:16.894: INFO: stderr: "" Sep 7 07:50:16.894: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Sep 7 07:50:16.894: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod -o json 
--namespace=kubectl-7506' Sep 7 07:50:16.993: INFO: stderr: "" Sep 7 07:50:16.993: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-09-07T07:50:16Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-09-07T07:50:16Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-09-07T07:50:16Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-7506\",\n \"resourceVersion\": \"273890\",\n 
\"selfLink\": \"/api/v1/namespaces/kubectl-7506/pods/e2e-test-httpd-pod\",\n \"uid\": \"77faede4-d843-432e-a192-27384479fcd1\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-6c2h9\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-6c2h9\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-6c2h9\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-07T07:50:16Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-07T07:50:16Z\",\n \"message\": \"containers with unready status: [e2e-test-httpd-pod]\",\n \"reason\": \"ContainersNotReady\",\n \"status\": \"False\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-07T07:50:16Z\",\n \"message\": \"containers with unready status: 
[e2e-test-httpd-pod]\",\n \"reason\": \"ContainersNotReady\",\n \"status\": \"False\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-07T07:50:16Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": false,\n \"restartCount\": 0,\n \"started\": false,\n \"state\": {\n \"waiting\": {\n \"reason\": \"ContainerCreating\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.15\",\n \"phase\": \"Pending\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-09-07T07:50:16Z\"\n }\n}\n" Sep 7 07:50:16.994: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config replace -f - --dry-run server --namespace=kubectl-7506' Sep 7 07:50:17.358: INFO: stderr: "W0907 07:50:17.067261 503 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.\n" Sep 7 07:50:17.358: INFO: stdout: "pod/e2e-test-httpd-pod replaced (dry run)\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Sep 7 07:50:17.360: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7506' Sep 7 07:50:19.153: INFO: stderr: "" Sep 7 07:50:19.153: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:50:19.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7506" for this suite. 
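The dry-run test above pipes the `kubectl get pod -o json` output back into `kubectl replace --dry-run`, then verifies the live pod still carries the original image. That verification step can be sketched as follows (a minimal stand-in using a trimmed copy of the pod document, not the e2e framework's actual helper):

```python
import json

# Trimmed stand-in for the `kubectl get pod -o json` output above;
# only the fields the check inspects are kept.
pod_json = """
{
  "kind": "Pod",
  "metadata": {"name": "e2e-test-httpd-pod", "namespace": "kubectl-7506"},
  "spec": {
    "containers": [
      {"name": "e2e-test-httpd-pod",
       "image": "docker.io/library/httpd:2.4.38-alpine"}
    ]
  }
}
"""

def container_image(pod_doc, container_name):
    """Return the image of the named container, or None if absent."""
    for c in pod_doc.get("spec", {}).get("containers", []):
        if c.get("name") == container_name:
            return c.get("image")
    return None

pod = json.loads(pod_json)
# A server-side dry-run must not mutate the persisted object, so the
# image should still be the one the pod was created with.
assert container_image(pod, "e2e-test-httpd-pod") == \
    "docker.io/library/httpd:2.4.38-alpine"
```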
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":303,"completed":40,"skipped":560,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:50:19.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 07:50:19.210: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Sep 7 07:50:22.202: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4643 create -f -' Sep 7 07:50:26.105: INFO: stderr: "" Sep 7 07:50:26.105: INFO: stdout: "e2e-test-crd-publish-openapi-9077-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Sep 7 07:50:26.105: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4643 delete e2e-test-crd-publish-openapi-9077-crds test-cr' Sep 7 07:50:26.224: INFO: stderr: "" Sep 7 07:50:26.224: INFO: stdout: 
"e2e-test-crd-publish-openapi-9077-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Sep 7 07:50:26.224: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4643 apply -f -' Sep 7 07:50:26.541: INFO: stderr: "" Sep 7 07:50:26.541: INFO: stdout: "e2e-test-crd-publish-openapi-9077-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Sep 7 07:50:26.541: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4643 delete e2e-test-crd-publish-openapi-9077-crds test-cr' Sep 7 07:50:26.652: INFO: stderr: "" Sep 7 07:50:26.652: INFO: stdout: "e2e-test-crd-publish-openapi-9077-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Sep 7 07:50:26.653: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9077-crds' Sep 7 07:50:26.964: INFO: stderr: "" Sep 7 07:50:26.964: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9077-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:50:29.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4643" for this suite. 
• [SLOW TEST:10.754 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":303,"completed":41,"skipped":570,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:50:29.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-656dc6a4-c9e2-4a1d-b4aa-5b706870dcde STEP: Creating a pod to test consume configMaps Sep 7 07:50:30.034: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-aea01e53-2b78-418e-85c1-fe853602b34a" in namespace "projected-8008" to be "Succeeded or Failed" Sep 7 
07:50:30.067: INFO: Pod "pod-projected-configmaps-aea01e53-2b78-418e-85c1-fe853602b34a": Phase="Pending", Reason="", readiness=false. Elapsed: 32.860031ms Sep 7 07:50:32.072: INFO: Pod "pod-projected-configmaps-aea01e53-2b78-418e-85c1-fe853602b34a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037690371s Sep 7 07:50:34.080: INFO: Pod "pod-projected-configmaps-aea01e53-2b78-418e-85c1-fe853602b34a": Phase="Running", Reason="", readiness=true. Elapsed: 4.045732904s Sep 7 07:50:36.084: INFO: Pod "pod-projected-configmaps-aea01e53-2b78-418e-85c1-fe853602b34a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.049681922s STEP: Saw pod success Sep 7 07:50:36.084: INFO: Pod "pod-projected-configmaps-aea01e53-2b78-418e-85c1-fe853602b34a" satisfied condition "Succeeded or Failed" Sep 7 07:50:36.086: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-aea01e53-2b78-418e-85c1-fe853602b34a container projected-configmap-volume-test: STEP: delete the pod Sep 7 07:50:36.134: INFO: Waiting for pod pod-projected-configmaps-aea01e53-2b78-418e-85c1-fe853602b34a to disappear Sep 7 07:50:36.150: INFO: Pod pod-projected-configmaps-aea01e53-2b78-418e-85c1-fe853602b34a no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:50:36.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8008" for this suite. 
• [SLOW TEST:6.249 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":42,"skipped":576,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:50:36.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-qqmv STEP: Creating a pod to test atomic-volume-subpath Sep 7 07:50:36.314: INFO: Waiting up to 5m0s for pod 
"pod-subpath-test-projected-qqmv" in namespace "subpath-730" to be "Succeeded or Failed" Sep 7 07:50:36.317: INFO: Pod "pod-subpath-test-projected-qqmv": Phase="Pending", Reason="", readiness=false. Elapsed: 3.606575ms Sep 7 07:50:38.322: INFO: Pod "pod-subpath-test-projected-qqmv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008591893s Sep 7 07:50:40.326: INFO: Pod "pod-subpath-test-projected-qqmv": Phase="Running", Reason="", readiness=true. Elapsed: 4.012614855s Sep 7 07:50:42.331: INFO: Pod "pod-subpath-test-projected-qqmv": Phase="Running", Reason="", readiness=true. Elapsed: 6.017367899s Sep 7 07:50:44.336: INFO: Pod "pod-subpath-test-projected-qqmv": Phase="Running", Reason="", readiness=true. Elapsed: 8.022414319s Sep 7 07:50:46.339: INFO: Pod "pod-subpath-test-projected-qqmv": Phase="Running", Reason="", readiness=true. Elapsed: 10.025892203s Sep 7 07:50:48.346: INFO: Pod "pod-subpath-test-projected-qqmv": Phase="Running", Reason="", readiness=true. Elapsed: 12.032705932s Sep 7 07:50:50.352: INFO: Pod "pod-subpath-test-projected-qqmv": Phase="Running", Reason="", readiness=true. Elapsed: 14.038441097s Sep 7 07:50:52.356: INFO: Pod "pod-subpath-test-projected-qqmv": Phase="Running", Reason="", readiness=true. Elapsed: 16.042262246s Sep 7 07:50:54.359: INFO: Pod "pod-subpath-test-projected-qqmv": Phase="Running", Reason="", readiness=true. Elapsed: 18.045582246s Sep 7 07:50:56.362: INFO: Pod "pod-subpath-test-projected-qqmv": Phase="Running", Reason="", readiness=true. Elapsed: 20.048788062s Sep 7 07:50:58.367: INFO: Pod "pod-subpath-test-projected-qqmv": Phase="Running", Reason="", readiness=true. Elapsed: 22.053059664s Sep 7 07:51:00.371: INFO: Pod "pod-subpath-test-projected-qqmv": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.05702718s STEP: Saw pod success Sep 7 07:51:00.371: INFO: Pod "pod-subpath-test-projected-qqmv" satisfied condition "Succeeded or Failed" Sep 7 07:51:00.374: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-qqmv container test-container-subpath-projected-qqmv: STEP: delete the pod Sep 7 07:51:00.636: INFO: Waiting for pod pod-subpath-test-projected-qqmv to disappear Sep 7 07:51:00.639: INFO: Pod pod-subpath-test-projected-qqmv no longer exists STEP: Deleting pod pod-subpath-test-projected-qqmv Sep 7 07:51:00.639: INFO: Deleting pod "pod-subpath-test-projected-qqmv" in namespace "subpath-730" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:51:00.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-730" for this suite. • [SLOW TEST:24.482 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":303,"completed":43,"skipped":578,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:51:00.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 07:51:00.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Sep 7 07:51:01.344: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-07T07:51:01Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-07T07:51:01Z]] name:name1 resourceVersion:274135 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f52bf176-dfca-43c4-a012-6efecd9d8fd8] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Sep 7 07:51:11.355: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-07T07:51:11Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-07T07:51:11Z]] name:name2 
resourceVersion:274174 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c8199ddb-7fc0-4d90-991e-3c2915a23498] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Sep 7 07:51:21.362: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-07T07:51:01Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-07T07:51:21Z]] name:name1 resourceVersion:274203 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f52bf176-dfca-43c4-a012-6efecd9d8fd8] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Sep 7 07:51:31.371: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-07T07:51:11Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-07T07:51:31Z]] name:name2 resourceVersion:274233 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c8199ddb-7fc0-4d90-991e-3c2915a23498] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Sep 7 07:51:41.380: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-07T07:51:01Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-07T07:51:21Z]] name:name1 
resourceVersion:274263 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f52bf176-dfca-43c4-a012-6efecd9d8fd8] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Sep 7 07:51:51.389: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-07T07:51:11Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-07T07:51:31Z]] name:name2 resourceVersion:274293 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c8199ddb-7fc0-4d90-991e-3c2915a23498] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:52:01.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-4145" for this suite. 
• [SLOW TEST:61.260 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":303,"completed":44,"skipped":585,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:52:01.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI 
documentation Sep 7 07:52:02.040: INFO: >>> kubeConfig: /root/.kube/config Sep 7 07:52:04.988: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:52:16.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9031" for this suite. • [SLOW TEST:14.893 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":303,"completed":45,"skipped":591,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:52:16.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 07:52:16.865: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:52:20.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-324" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":303,"completed":46,"skipped":597,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:52:20.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for NodePort service [LinuxOnly] 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-2865 Sep 7 07:52:25.083: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-2865 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Sep 7 07:52:25.324: INFO: stderr: "I0907 07:52:25.227690 629 log.go:181] (0xc00026c000) (0xc0007bc000) Create stream\nI0907 07:52:25.227750 629 log.go:181] (0xc00026c000) (0xc0007bc000) Stream added, broadcasting: 1\nI0907 07:52:25.229407 629 log.go:181] (0xc00026c000) Reply frame received for 1\nI0907 07:52:25.229466 629 log.go:181] (0xc00026c000) (0xc000e0c000) Create stream\nI0907 07:52:25.229491 629 log.go:181] (0xc00026c000) (0xc000e0c000) Stream added, broadcasting: 3\nI0907 07:52:25.230444 629 log.go:181] (0xc00026c000) Reply frame received for 3\nI0907 07:52:25.230486 629 log.go:181] (0xc00026c000) (0xc0007bc0a0) Create stream\nI0907 07:52:25.230502 629 log.go:181] (0xc00026c000) (0xc0007bc0a0) Stream added, broadcasting: 5\nI0907 07:52:25.231540 629 log.go:181] (0xc00026c000) Reply frame received for 5\nI0907 07:52:25.310637 629 log.go:181] (0xc00026c000) Data frame received for 5\nI0907 07:52:25.310657 629 log.go:181] (0xc0007bc0a0) (5) Data frame handling\nI0907 07:52:25.310668 629 log.go:181] (0xc0007bc0a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0907 07:52:25.317087 629 log.go:181] (0xc00026c000) Data frame received for 3\nI0907 07:52:25.317108 629 log.go:181] (0xc000e0c000) (3) Data frame handling\nI0907 07:52:25.317128 629 log.go:181] (0xc000e0c000) (3) Data frame sent\nI0907 07:52:25.317436 629 log.go:181] (0xc00026c000) Data frame received for 3\nI0907 07:52:25.317467 629 log.go:181] (0xc000e0c000) (3) Data frame handling\nI0907 
07:52:25.317579 629 log.go:181] (0xc00026c000) Data frame received for 5\nI0907 07:52:25.317613 629 log.go:181] (0xc0007bc0a0) (5) Data frame handling\nI0907 07:52:25.319370 629 log.go:181] (0xc00026c000) Data frame received for 1\nI0907 07:52:25.319400 629 log.go:181] (0xc0007bc000) (1) Data frame handling\nI0907 07:52:25.319408 629 log.go:181] (0xc0007bc000) (1) Data frame sent\nI0907 07:52:25.319421 629 log.go:181] (0xc00026c000) (0xc0007bc000) Stream removed, broadcasting: 1\nI0907 07:52:25.319438 629 log.go:181] (0xc00026c000) Go away received\nI0907 07:52:25.319708 629 log.go:181] (0xc00026c000) (0xc0007bc000) Stream removed, broadcasting: 1\nI0907 07:52:25.319727 629 log.go:181] (0xc00026c000) (0xc000e0c000) Stream removed, broadcasting: 3\nI0907 07:52:25.319735 629 log.go:181] (0xc00026c000) (0xc0007bc0a0) Stream removed, broadcasting: 5\n" Sep 7 07:52:25.324: INFO: stdout: "iptables" Sep 7 07:52:25.324: INFO: proxyMode: iptables Sep 7 07:52:25.329: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 7 07:52:25.369: INFO: Pod kube-proxy-mode-detector still exists Sep 7 07:52:27.369: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 7 07:52:27.373: INFO: Pod kube-proxy-mode-detector still exists Sep 7 07:52:29.369: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 7 07:52:29.373: INFO: Pod kube-proxy-mode-detector still exists Sep 7 07:52:31.369: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 7 07:52:31.373: INFO: Pod kube-proxy-mode-detector still exists Sep 7 07:52:33.369: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 7 07:52:33.373: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-2865 STEP: creating replication controller affinity-nodeport-timeout in namespace services-2865 I0907 07:52:33.428903 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-2865, 
replica count: 3 I0907 07:52:36.479392 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 07:52:39.479710 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 7 07:52:39.490: INFO: Creating new exec pod Sep 7 07:52:44.526: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-2865 execpod-affinitytcfm2 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Sep 7 07:52:44.761: INFO: stderr: "I0907 07:52:44.667811 648 log.go:181] (0xc0007e8dc0) (0xc000115f40) Create stream\nI0907 07:52:44.667860 648 log.go:181] (0xc0007e8dc0) (0xc000115f40) Stream added, broadcasting: 1\nI0907 07:52:44.672514 648 log.go:181] (0xc0007e8dc0) Reply frame received for 1\nI0907 07:52:44.672560 648 log.go:181] (0xc0007e8dc0) (0xc0001140a0) Create stream\nI0907 07:52:44.672573 648 log.go:181] (0xc0007e8dc0) (0xc0001140a0) Stream added, broadcasting: 3\nI0907 07:52:44.673591 648 log.go:181] (0xc0007e8dc0) Reply frame received for 3\nI0907 07:52:44.673614 648 log.go:181] (0xc0007e8dc0) (0xc000114820) Create stream\nI0907 07:52:44.673623 648 log.go:181] (0xc0007e8dc0) (0xc000114820) Stream added, broadcasting: 5\nI0907 07:52:44.674561 648 log.go:181] (0xc0007e8dc0) Reply frame received for 5\nI0907 07:52:44.754699 648 log.go:181] (0xc0007e8dc0) Data frame received for 5\nI0907 07:52:44.754832 648 log.go:181] (0xc000114820) (5) Data frame handling\nI0907 07:52:44.754877 648 log.go:181] (0xc000114820) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI0907 07:52:44.755129 648 log.go:181] (0xc0007e8dc0) Data frame received for 5\nI0907 07:52:44.755167 648 log.go:181] (0xc000114820) (5) Data frame handling\nI0907 07:52:44.755198 648 log.go:181] (0xc000114820) (5) Data 
frame sent\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0907 07:52:44.755326 648 log.go:181] (0xc0007e8dc0) Data frame received for 5\nI0907 07:52:44.755346 648 log.go:181] (0xc000114820) (5) Data frame handling\nI0907 07:52:44.755372 648 log.go:181] (0xc0007e8dc0) Data frame received for 3\nI0907 07:52:44.755387 648 log.go:181] (0xc0001140a0) (3) Data frame handling\nI0907 07:52:44.757477 648 log.go:181] (0xc0007e8dc0) Data frame received for 1\nI0907 07:52:44.757499 648 log.go:181] (0xc000115f40) (1) Data frame handling\nI0907 07:52:44.757512 648 log.go:181] (0xc000115f40) (1) Data frame sent\nI0907 07:52:44.757537 648 log.go:181] (0xc0007e8dc0) (0xc000115f40) Stream removed, broadcasting: 1\nI0907 07:52:44.757570 648 log.go:181] (0xc0007e8dc0) Go away received\nI0907 07:52:44.757911 648 log.go:181] (0xc0007e8dc0) (0xc000115f40) Stream removed, broadcasting: 1\nI0907 07:52:44.757925 648 log.go:181] (0xc0007e8dc0) (0xc0001140a0) Stream removed, broadcasting: 3\nI0907 07:52:44.757931 648 log.go:181] (0xc0007e8dc0) (0xc000114820) Stream removed, broadcasting: 5\n" Sep 7 07:52:44.761: INFO: stdout: "" Sep 7 07:52:44.762: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-2865 execpod-affinitytcfm2 -- /bin/sh -x -c nc -zv -t -w 2 10.107.56.30 80' Sep 7 07:52:44.960: INFO: stderr: "I0907 07:52:44.885183 666 log.go:181] (0xc00003a420) (0xc00088ca00) Create stream\nI0907 07:52:44.885259 666 log.go:181] (0xc00003a420) (0xc00088ca00) Stream added, broadcasting: 1\nI0907 07:52:44.888245 666 log.go:181] (0xc00003a420) Reply frame received for 1\nI0907 07:52:44.888360 666 log.go:181] (0xc00003a420) (0xc00088d180) Create stream\nI0907 07:52:44.888423 666 log.go:181] (0xc00003a420) (0xc00088d180) Stream added, broadcasting: 3\nI0907 07:52:44.889944 666 log.go:181] (0xc00003a420) Reply frame received for 3\nI0907 07:52:44.889974 666 log.go:181] (0xc00003a420) 
(0xc000a39f40) Create stream\nI0907 07:52:44.889985 666 log.go:181] (0xc00003a420) (0xc000a39f40) Stream added, broadcasting: 5\nI0907 07:52:44.890782 666 log.go:181] (0xc00003a420) Reply frame received for 5\nI0907 07:52:44.952908 666 log.go:181] (0xc00003a420) Data frame received for 5\nI0907 07:52:44.952953 666 log.go:181] (0xc000a39f40) (5) Data frame handling\nI0907 07:52:44.952989 666 log.go:181] (0xc000a39f40) (5) Data frame sent\n+ nc -zv -t -w 2 10.107.56.30 80\nI0907 07:52:44.954380 666 log.go:181] (0xc00003a420) Data frame received for 5\nI0907 07:52:44.954408 666 log.go:181] (0xc000a39f40) (5) Data frame handling\nI0907 07:52:44.954440 666 log.go:181] (0xc000a39f40) (5) Data frame sent\nConnection to 10.107.56.30 80 port [tcp/http] succeeded!\nI0907 07:52:44.954894 666 log.go:181] (0xc00003a420) Data frame received for 5\nI0907 07:52:44.954922 666 log.go:181] (0xc00003a420) Data frame received for 3\nI0907 07:52:44.954957 666 log.go:181] (0xc00088d180) (3) Data frame handling\nI0907 07:52:44.954985 666 log.go:181] (0xc000a39f40) (5) Data frame handling\nI0907 07:52:44.956963 666 log.go:181] (0xc00003a420) Data frame received for 1\nI0907 07:52:44.956989 666 log.go:181] (0xc00088ca00) (1) Data frame handling\nI0907 07:52:44.957002 666 log.go:181] (0xc00088ca00) (1) Data frame sent\nI0907 07:52:44.957020 666 log.go:181] (0xc00003a420) (0xc00088ca00) Stream removed, broadcasting: 1\nI0907 07:52:44.957045 666 log.go:181] (0xc00003a420) Go away received\nI0907 07:52:44.957408 666 log.go:181] (0xc00003a420) (0xc00088ca00) Stream removed, broadcasting: 1\nI0907 07:52:44.957424 666 log.go:181] (0xc00003a420) (0xc00088d180) Stream removed, broadcasting: 3\nI0907 07:52:44.957431 666 log.go:181] (0xc00003a420) (0xc000a39f40) Stream removed, broadcasting: 5\n" Sep 7 07:52:44.960: INFO: stdout: "" Sep 7 07:52:44.961: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-2865 
execpod-affinitytcfm2 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 32728' Sep 7 07:52:45.154: INFO: stderr: "I0907 07:52:45.085969 684 log.go:181] (0xc0002f80b0) (0xc000d02000) Create stream\nI0907 07:52:45.086034 684 log.go:181] (0xc0002f80b0) (0xc000d02000) Stream added, broadcasting: 1\nI0907 07:52:45.087769 684 log.go:181] (0xc0002f80b0) Reply frame received for 1\nI0907 07:52:45.087822 684 log.go:181] (0xc0002f80b0) (0xc000aa8140) Create stream\nI0907 07:52:45.087848 684 log.go:181] (0xc0002f80b0) (0xc000aa8140) Stream added, broadcasting: 3\nI0907 07:52:45.088897 684 log.go:181] (0xc0002f80b0) Reply frame received for 3\nI0907 07:52:45.088922 684 log.go:181] (0xc0002f80b0) (0xc000d020a0) Create stream\nI0907 07:52:45.088930 684 log.go:181] (0xc0002f80b0) (0xc000d020a0) Stream added, broadcasting: 5\nI0907 07:52:45.089803 684 log.go:181] (0xc0002f80b0) Reply frame received for 5\nI0907 07:52:45.148454 684 log.go:181] (0xc0002f80b0) Data frame received for 3\nI0907 07:52:45.148517 684 log.go:181] (0xc000aa8140) (3) Data frame handling\nI0907 07:52:45.148563 684 log.go:181] (0xc0002f80b0) Data frame received for 5\nI0907 07:52:45.148590 684 log.go:181] (0xc000d020a0) (5) Data frame handling\nI0907 07:52:45.148620 684 log.go:181] (0xc000d020a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.15 32728\nConnection to 172.18.0.15 32728 port [tcp/32728] succeeded!\nI0907 07:52:45.148802 684 log.go:181] (0xc0002f80b0) Data frame received for 5\nI0907 07:52:45.148826 684 log.go:181] (0xc000d020a0) (5) Data frame handling\nI0907 07:52:45.150465 684 log.go:181] (0xc0002f80b0) Data frame received for 1\nI0907 07:52:45.150501 684 log.go:181] (0xc000d02000) (1) Data frame handling\nI0907 07:52:45.150526 684 log.go:181] (0xc000d02000) (1) Data frame sent\nI0907 07:52:45.150549 684 log.go:181] (0xc0002f80b0) (0xc000d02000) Stream removed, broadcasting: 1\nI0907 07:52:45.150578 684 log.go:181] (0xc0002f80b0) Go away received\nI0907 07:52:45.151036 684 log.go:181] (0xc0002f80b0) 
(0xc000d02000) Stream removed, broadcasting: 1\nI0907 07:52:45.151056 684 log.go:181] (0xc0002f80b0) (0xc000aa8140) Stream removed, broadcasting: 3\nI0907 07:52:45.151066 684 log.go:181] (0xc0002f80b0) (0xc000d020a0) Stream removed, broadcasting: 5\n" Sep 7 07:52:45.154: INFO: stdout: "" Sep 7 07:52:45.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-2865 execpod-affinitytcfm2 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 32728' Sep 7 07:52:45.358: INFO: stderr: "I0907 07:52:45.287802 702 log.go:181] (0xc000263600) (0xc000d845a0) Create stream\nI0907 07:52:45.287853 702 log.go:181] (0xc000263600) (0xc000d845a0) Stream added, broadcasting: 1\nI0907 07:52:45.293494 702 log.go:181] (0xc000263600) Reply frame received for 1\nI0907 07:52:45.293525 702 log.go:181] (0xc000263600) (0xc000d84000) Create stream\nI0907 07:52:45.293535 702 log.go:181] (0xc000263600) (0xc000d84000) Stream added, broadcasting: 3\nI0907 07:52:45.294582 702 log.go:181] (0xc000263600) Reply frame received for 3\nI0907 07:52:45.294616 702 log.go:181] (0xc000263600) (0xc0006b46e0) Create stream\nI0907 07:52:45.294627 702 log.go:181] (0xc000263600) (0xc0006b46e0) Stream added, broadcasting: 5\nI0907 07:52:45.295600 702 log.go:181] (0xc000263600) Reply frame received for 5\nI0907 07:52:45.351486 702 log.go:181] (0xc000263600) Data frame received for 5\nI0907 07:52:45.351520 702 log.go:181] (0xc0006b46e0) (5) Data frame handling\nI0907 07:52:45.351529 702 log.go:181] (0xc0006b46e0) (5) Data frame sent\nI0907 07:52:45.351536 702 log.go:181] (0xc000263600) Data frame received for 5\nI0907 07:52:45.351541 702 log.go:181] (0xc0006b46e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 32728\nConnection to 172.18.0.14 32728 port [tcp/32728] succeeded!\nI0907 07:52:45.351560 702 log.go:181] (0xc000263600) Data frame received for 3\nI0907 07:52:45.351565 702 log.go:181] (0xc000d84000) (3) Data frame handling\nI0907 
07:52:45.353234 702 log.go:181] (0xc000263600) Data frame received for 1\nI0907 07:52:45.353260 702 log.go:181] (0xc000d845a0) (1) Data frame handling\nI0907 07:52:45.353282 702 log.go:181] (0xc000d845a0) (1) Data frame sent\nI0907 07:52:45.353302 702 log.go:181] (0xc000263600) (0xc000d845a0) Stream removed, broadcasting: 1\nI0907 07:52:45.353322 702 log.go:181] (0xc000263600) Go away received\nI0907 07:52:45.353708 702 log.go:181] (0xc000263600) (0xc000d845a0) Stream removed, broadcasting: 1\nI0907 07:52:45.353723 702 log.go:181] (0xc000263600) (0xc000d84000) Stream removed, broadcasting: 3\nI0907 07:52:45.353729 702 log.go:181] (0xc000263600) (0xc0006b46e0) Stream removed, broadcasting: 5\n" Sep 7 07:52:45.358: INFO: stdout: "" Sep 7 07:52:45.358: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-2865 execpod-affinitytcfm2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.15:32728/ ; done' Sep 7 07:52:45.665: INFO: stderr: "I0907 07:52:45.488475 720 log.go:181] (0xc000948fd0) (0xc00044cdc0) Create stream\nI0907 07:52:45.488516 720 log.go:181] (0xc000948fd0) (0xc00044cdc0) Stream added, broadcasting: 1\nI0907 07:52:45.493185 720 log.go:181] (0xc000948fd0) Reply frame received for 1\nI0907 07:52:45.493234 720 log.go:181] (0xc000948fd0) (0xc000c0a000) Create stream\nI0907 07:52:45.493249 720 log.go:181] (0xc000948fd0) (0xc000c0a000) Stream added, broadcasting: 3\nI0907 07:52:45.494425 720 log.go:181] (0xc000948fd0) Reply frame received for 3\nI0907 07:52:45.494502 720 log.go:181] (0xc000948fd0) (0xc000c0a0a0) Create stream\nI0907 07:52:45.494518 720 log.go:181] (0xc000948fd0) (0xc000c0a0a0) Stream added, broadcasting: 5\nI0907 07:52:45.495708 720 log.go:181] (0xc000948fd0) Reply frame received for 5\nI0907 07:52:45.557610 720 log.go:181] (0xc000948fd0) Data frame received for 5\nI0907 07:52:45.557650 720 log.go:181] (0xc000c0a0a0) (5) Data 
frame handling\nI0907 07:52:45.557667 720 log.go:181] (0xc000c0a0a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32728/\nI0907 07:52:45.557690 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.557701 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.557719 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.561285 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.561330 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.561371 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.561749 720 log.go:181] (0xc000948fd0) Data frame received for 5\nI0907 07:52:45.561764 720 log.go:181] (0xc000c0a0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32728/\nI0907 07:52:45.561781 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.561809 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.561830 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.561857 720 log.go:181] (0xc000c0a0a0) (5) Data frame sent\nI0907 07:52:45.566370 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.566392 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.566403 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.566919 720 log.go:181] (0xc000948fd0) Data frame received for 5\nI0907 07:52:45.566935 720 log.go:181] (0xc000c0a0a0) (5) Data frame handling\nI0907 07:52:45.566945 720 log.go:181] (0xc000c0a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32728/\nI0907 07:52:45.566997 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.567016 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.567026 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.573831 720 log.go:181] (0xc000948fd0) Data frame 
received for 3\nI0907 07:52:45.573852 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.573863 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.574326 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.574364 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.574388 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.574411 720 log.go:181] (0xc000948fd0) Data frame received for 5\nI0907 07:52:45.574431 720 log.go:181] (0xc000c0a0a0) (5) Data frame handling\nI0907 07:52:45.574456 720 log.go:181] (0xc000c0a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32728/\nI0907 07:52:45.580370 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.580393 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.580413 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.581092 720 log.go:181] (0xc000948fd0) Data frame received for 5\nI0907 07:52:45.581125 720 log.go:181] (0xc000c0a0a0) (5) Data frame handling\nI0907 07:52:45.581147 720 log.go:181] (0xc000c0a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32728/\nI0907 07:52:45.581324 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.581342 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.581357 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.585606 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.585628 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.585657 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.586264 720 log.go:181] (0xc000948fd0) Data frame received for 5\nI0907 07:52:45.586289 720 log.go:181] (0xc000c0a0a0) (5) Data frame handling\nI0907 07:52:45.586302 720 log.go:181] (0xc000c0a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.18.0.15:32728/\nI0907 07:52:45.586327 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.586346 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.586357 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.590533 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.590556 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.590574 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.590973 720 log.go:181] (0xc000948fd0) Data frame received for 5\nI0907 07:52:45.591006 720 log.go:181] (0xc000c0a0a0) (5) Data frame handling\nI0907 07:52:45.591027 720 log.go:181] (0xc000c0a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32728/\nI0907 07:52:45.591051 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.591074 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.591099 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.595903 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.595928 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.595945 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.597152 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.597188 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.597202 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.597218 720 log.go:181] (0xc000948fd0) Data frame received for 5\nI0907 07:52:45.597228 720 log.go:181] (0xc000c0a0a0) (5) Data frame handling\nI0907 07:52:45.597239 720 log.go:181] (0xc000c0a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32728/\nI0907 07:52:45.601469 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.601497 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.601521 720 log.go:181] (0xc000c0a000) 
(3) Data frame sent\nI0907 07:52:45.601971 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.601991 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.602007 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.602027 720 log.go:181] (0xc000948fd0) Data frame received for 5\nI0907 07:52:45.602037 720 log.go:181] (0xc000c0a0a0) (5) Data frame handling\nI0907 07:52:45.602047 720 log.go:181] (0xc000c0a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32728/\nI0907 07:52:45.606149 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.606186 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.606210 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.606811 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.606827 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.606850 720 log.go:181] (0xc000948fd0) Data frame received for 5\nI0907 07:52:45.606870 720 log.go:181] (0xc000c0a0a0) (5) Data frame handling\nI0907 07:52:45.606881 720 log.go:181] (0xc000c0a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32728/\nI0907 07:52:45.606898 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.614310 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.614336 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.614359 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.614778 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.614811 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.614826 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.614848 720 log.go:181] (0xc000948fd0) Data frame received for 5\nI0907 07:52:45.614859 720 log.go:181] (0xc000c0a0a0) (5) Data frame handling\nI0907 07:52:45.614872 720 log.go:181] (0xc000c0a0a0) (5) 
Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32728/\nI0907 07:52:45.622472 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.622496 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.622517 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.623300 720 log.go:181] (0xc000948fd0) Data frame received for 5\nI0907 07:52:45.623318 720 log.go:181] (0xc000c0a0a0) (5) Data frame handling\nI0907 07:52:45.623329 720 log.go:181] (0xc000c0a0a0) (5) Data frame sent\nI0907 07:52:45.623338 720 log.go:181] (0xc000948fd0) Data frame received for 5\nI0907 07:52:45.623346 720 log.go:181] (0xc000c0a0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32728/\nI0907 07:52:45.623366 720 log.go:181] (0xc000c0a0a0) (5) Data frame sent\nI0907 07:52:45.623409 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.623439 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.623467 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.629628 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.629661 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.629702 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.630058 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.630084 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.630125 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.630149 720 log.go:181] (0xc000948fd0) Data frame received for 5\nI0907 07:52:45.630161 720 log.go:181] (0xc000c0a0a0) (5) Data frame handling\nI0907 07:52:45.630172 720 log.go:181] (0xc000c0a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32728/\nI0907 07:52:45.636533 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.636555 720 log.go:181] (0xc000c0a000) (3) Data frame 
handling\nI0907 07:52:45.636582 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.636988 720 log.go:181] (0xc000948fd0) Data frame received for 5\nI0907 07:52:45.637002 720 log.go:181] (0xc000c0a0a0) (5) Data frame handling\nI0907 07:52:45.637008 720 log.go:181] (0xc000c0a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32728/\nI0907 07:52:45.637017 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.637037 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.637044 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.643431 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.643448 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.643466 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.643936 720 log.go:181] (0xc000948fd0) Data frame received for 5\nI0907 07:52:45.643951 720 log.go:181] (0xc000c0a0a0) (5) Data frame handling\nI0907 07:52:45.643958 720 log.go:181] (0xc000c0a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32728/\nI0907 07:52:45.644118 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.644138 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.644145 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.649656 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.649667 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.649673 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.650455 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.650499 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.650521 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.650541 720 log.go:181] (0xc000948fd0) Data frame received for 5\nI0907 07:52:45.650552 720 log.go:181] (0xc000c0a0a0) (5) Data frame 
handling\nI0907 07:52:45.650583 720 log.go:181] (0xc000c0a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32728/\nI0907 07:52:45.657506 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.657536 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.657558 720 log.go:181] (0xc000c0a000) (3) Data frame sent\nI0907 07:52:45.657887 720 log.go:181] (0xc000948fd0) Data frame received for 3\nI0907 07:52:45.657919 720 log.go:181] (0xc000c0a000) (3) Data frame handling\nI0907 07:52:45.657949 720 log.go:181] (0xc000948fd0) Data frame received for 5\nI0907 07:52:45.657964 720 log.go:181] (0xc000c0a0a0) (5) Data frame handling\nI0907 07:52:45.659949 720 log.go:181] (0xc000948fd0) Data frame received for 1\nI0907 07:52:45.659981 720 log.go:181] (0xc00044cdc0) (1) Data frame handling\nI0907 07:52:45.660099 720 log.go:181] (0xc00044cdc0) (1) Data frame sent\nI0907 07:52:45.660150 720 log.go:181] (0xc000948fd0) (0xc00044cdc0) Stream removed, broadcasting: 1\nI0907 07:52:45.660184 720 log.go:181] (0xc000948fd0) Go away received\nI0907 07:52:45.660648 720 log.go:181] (0xc000948fd0) (0xc00044cdc0) Stream removed, broadcasting: 1\nI0907 07:52:45.660676 720 log.go:181] (0xc000948fd0) (0xc000c0a000) Stream removed, broadcasting: 3\nI0907 07:52:45.660689 720 log.go:181] (0xc000948fd0) (0xc000c0a0a0) Stream removed, broadcasting: 5\n" Sep 7 07:52:45.666: INFO: stdout: "\naffinity-nodeport-timeout-8xvx5\naffinity-nodeport-timeout-8xvx5\naffinity-nodeport-timeout-8xvx5\naffinity-nodeport-timeout-8xvx5\naffinity-nodeport-timeout-8xvx5\naffinity-nodeport-timeout-8xvx5\naffinity-nodeport-timeout-8xvx5\naffinity-nodeport-timeout-8xvx5\naffinity-nodeport-timeout-8xvx5\naffinity-nodeport-timeout-8xvx5\naffinity-nodeport-timeout-8xvx5\naffinity-nodeport-timeout-8xvx5\naffinity-nodeport-timeout-8xvx5\naffinity-nodeport-timeout-8xvx5\naffinity-nodeport-timeout-8xvx5\naffinity-nodeport-timeout-8xvx5" Sep 7 07:52:45.666: INFO: 
Received response from host: affinity-nodeport-timeout-8xvx5 Sep 7 07:52:45.666: INFO: Received response from host: affinity-nodeport-timeout-8xvx5 Sep 7 07:52:45.666: INFO: Received response from host: affinity-nodeport-timeout-8xvx5 Sep 7 07:52:45.666: INFO: Received response from host: affinity-nodeport-timeout-8xvx5 Sep 7 07:52:45.666: INFO: Received response from host: affinity-nodeport-timeout-8xvx5 Sep 7 07:52:45.666: INFO: Received response from host: affinity-nodeport-timeout-8xvx5 Sep 7 07:52:45.666: INFO: Received response from host: affinity-nodeport-timeout-8xvx5 Sep 7 07:52:45.666: INFO: Received response from host: affinity-nodeport-timeout-8xvx5 Sep 7 07:52:45.666: INFO: Received response from host: affinity-nodeport-timeout-8xvx5 Sep 7 07:52:45.666: INFO: Received response from host: affinity-nodeport-timeout-8xvx5 Sep 7 07:52:45.666: INFO: Received response from host: affinity-nodeport-timeout-8xvx5 Sep 7 07:52:45.666: INFO: Received response from host: affinity-nodeport-timeout-8xvx5 Sep 7 07:52:45.666: INFO: Received response from host: affinity-nodeport-timeout-8xvx5 Sep 7 07:52:45.666: INFO: Received response from host: affinity-nodeport-timeout-8xvx5 Sep 7 07:52:45.666: INFO: Received response from host: affinity-nodeport-timeout-8xvx5 Sep 7 07:52:45.666: INFO: Received response from host: affinity-nodeport-timeout-8xvx5 Sep 7 07:52:45.666: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-2865 execpod-affinitytcfm2 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.15:32728/' Sep 7 07:52:45.878: INFO: stderr: "I0907 07:52:45.817778 738 log.go:181] (0xc0007da000) (0xc000c57b80) Create stream\nI0907 07:52:45.817860 738 log.go:181] (0xc0007da000) (0xc000c57b80) Stream added, broadcasting: 1\nI0907 07:52:45.821593 738 log.go:181] (0xc0007da000) Reply frame received for 1\nI0907 07:52:45.821672 738 log.go:181] (0xc0007da000) (0xc0007fa280) Create 
stream\nI0907 07:52:45.821695 738 log.go:181] (0xc0007da000) (0xc0007fa280) Stream added, broadcasting: 3\nI0907 07:52:45.823242 738 log.go:181] (0xc0007da000) Reply frame received for 3\nI0907 07:52:45.823286 738 log.go:181] (0xc0007da000) (0xc000c57c20) Create stream\nI0907 07:52:45.823306 738 log.go:181] (0xc0007da000) (0xc000c57c20) Stream added, broadcasting: 5\nI0907 07:52:45.824410 738 log.go:181] (0xc0007da000) Reply frame received for 5\nI0907 07:52:45.869227 738 log.go:181] (0xc0007da000) Data frame received for 5\nI0907 07:52:45.869331 738 log.go:181] (0xc000c57c20) (5) Data frame handling\nI0907 07:52:45.869365 738 log.go:181] (0xc000c57c20) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32728/\nI0907 07:52:45.872302 738 log.go:181] (0xc0007da000) Data frame received for 3\nI0907 07:52:45.872322 738 log.go:181] (0xc0007fa280) (3) Data frame handling\nI0907 07:52:45.872348 738 log.go:181] (0xc0007fa280) (3) Data frame sent\nI0907 07:52:45.872364 738 log.go:181] (0xc0007da000) Data frame received for 3\nI0907 07:52:45.872375 738 log.go:181] (0xc0007fa280) (3) Data frame handling\nI0907 07:52:45.872516 738 log.go:181] (0xc0007da000) Data frame received for 5\nI0907 07:52:45.872534 738 log.go:181] (0xc000c57c20) (5) Data frame handling\nI0907 07:52:45.874180 738 log.go:181] (0xc0007da000) Data frame received for 1\nI0907 07:52:45.874203 738 log.go:181] (0xc000c57b80) (1) Data frame handling\nI0907 07:52:45.874220 738 log.go:181] (0xc000c57b80) (1) Data frame sent\nI0907 07:52:45.874229 738 log.go:181] (0xc0007da000) (0xc000c57b80) Stream removed, broadcasting: 1\nI0907 07:52:45.874238 738 log.go:181] (0xc0007da000) Go away received\nI0907 07:52:45.874682 738 log.go:181] (0xc0007da000) (0xc000c57b80) Stream removed, broadcasting: 1\nI0907 07:52:45.874702 738 log.go:181] (0xc0007da000) (0xc0007fa280) Stream removed, broadcasting: 3\nI0907 07:52:45.874712 738 log.go:181] (0xc0007da000) (0xc000c57c20) Stream removed, broadcasting: 5\n" 
Sep 7 07:52:45.879: INFO: stdout: "affinity-nodeport-timeout-8xvx5" Sep 7 07:53:00.879: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-2865 execpod-affinitytcfm2 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.15:32728/' Sep 7 07:53:01.098: INFO: stderr: "I0907 07:53:01.013422 757 log.go:181] (0xc000238f20) (0xc000484000) Create stream\nI0907 07:53:01.013473 757 log.go:181] (0xc000238f20) (0xc000484000) Stream added, broadcasting: 1\nI0907 07:53:01.019843 757 log.go:181] (0xc000238f20) Reply frame received for 1\nI0907 07:53:01.019907 757 log.go:181] (0xc000238f20) (0xc000642000) Create stream\nI0907 07:53:01.019924 757 log.go:181] (0xc000238f20) (0xc000642000) Stream added, broadcasting: 3\nI0907 07:53:01.021222 757 log.go:181] (0xc000238f20) Reply frame received for 3\nI0907 07:53:01.021283 757 log.go:181] (0xc000238f20) (0xc000484b40) Create stream\nI0907 07:53:01.021307 757 log.go:181] (0xc000238f20) (0xc000484b40) Stream added, broadcasting: 5\nI0907 07:53:01.022500 757 log.go:181] (0xc000238f20) Reply frame received for 5\nI0907 07:53:01.086836 757 log.go:181] (0xc000238f20) Data frame received for 5\nI0907 07:53:01.086865 757 log.go:181] (0xc000484b40) (5) Data frame handling\nI0907 07:53:01.086887 757 log.go:181] (0xc000484b40) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32728/\nI0907 07:53:01.090920 757 log.go:181] (0xc000238f20) Data frame received for 3\nI0907 07:53:01.090950 757 log.go:181] (0xc000642000) (3) Data frame handling\nI0907 07:53:01.090972 757 log.go:181] (0xc000642000) (3) Data frame sent\nI0907 07:53:01.091491 757 log.go:181] (0xc000238f20) Data frame received for 3\nI0907 07:53:01.091534 757 log.go:181] (0xc000642000) (3) Data frame handling\nI0907 07:53:01.091697 757 log.go:181] (0xc000238f20) Data frame received for 5\nI0907 07:53:01.091730 757 log.go:181] (0xc000484b40) (5) Data frame handling\nI0907 
07:53:01.093678 757 log.go:181] (0xc000238f20) Data frame received for 1\nI0907 07:53:01.093703 757 log.go:181] (0xc000484000) (1) Data frame handling\nI0907 07:53:01.093724 757 log.go:181] (0xc000484000) (1) Data frame sent\nI0907 07:53:01.093745 757 log.go:181] (0xc000238f20) (0xc000484000) Stream removed, broadcasting: 1\nI0907 07:53:01.093775 757 log.go:181] (0xc000238f20) Go away received\nI0907 07:53:01.094210 757 log.go:181] (0xc000238f20) (0xc000484000) Stream removed, broadcasting: 1\nI0907 07:53:01.094234 757 log.go:181] (0xc000238f20) (0xc000642000) Stream removed, broadcasting: 3\nI0907 07:53:01.094247 757 log.go:181] (0xc000238f20) (0xc000484b40) Stream removed, broadcasting: 5\n" Sep 7 07:53:01.098: INFO: stdout: "affinity-nodeport-timeout-8xvx5" Sep 7 07:53:16.099: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-2865 execpod-affinitytcfm2 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.15:32728/' Sep 7 07:53:16.342: INFO: stderr: "I0907 07:53:16.236551 775 log.go:181] (0xc0007e7340) (0xc0007dca00) Create stream\nI0907 07:53:16.236598 775 log.go:181] (0xc0007e7340) (0xc0007dca00) Stream added, broadcasting: 1\nI0907 07:53:16.242101 775 log.go:181] (0xc0007e7340) Reply frame received for 1\nI0907 07:53:16.242138 775 log.go:181] (0xc0007e7340) (0xc0007dc000) Create stream\nI0907 07:53:16.242152 775 log.go:181] (0xc0007e7340) (0xc0007dc000) Stream added, broadcasting: 3\nI0907 07:53:16.243124 775 log.go:181] (0xc0007e7340) Reply frame received for 3\nI0907 07:53:16.243167 775 log.go:181] (0xc0007e7340) (0xc000a2bea0) Create stream\nI0907 07:53:16.243181 775 log.go:181] (0xc0007e7340) (0xc000a2bea0) Stream added, broadcasting: 5\nI0907 07:53:16.244335 775 log.go:181] (0xc0007e7340) Reply frame received for 5\nI0907 07:53:16.329431 775 log.go:181] (0xc0007e7340) Data frame received for 5\nI0907 07:53:16.329477 775 log.go:181] (0xc000a2bea0) (5) Data frame 
handling\nI0907 07:53:16.329537 775 log.go:181] (0xc000a2bea0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32728/\nI0907 07:53:16.335214 775 log.go:181] (0xc0007e7340) Data frame received for 3\nI0907 07:53:16.335250 775 log.go:181] (0xc0007dc000) (3) Data frame handling\nI0907 07:53:16.335270 775 log.go:181] (0xc0007dc000) (3) Data frame sent\nI0907 07:53:16.336060 775 log.go:181] (0xc0007e7340) Data frame received for 5\nI0907 07:53:16.336090 775 log.go:181] (0xc000a2bea0) (5) Data frame handling\nI0907 07:53:16.336375 775 log.go:181] (0xc0007e7340) Data frame received for 3\nI0907 07:53:16.336390 775 log.go:181] (0xc0007dc000) (3) Data frame handling\nI0907 07:53:16.338029 775 log.go:181] (0xc0007e7340) Data frame received for 1\nI0907 07:53:16.338058 775 log.go:181] (0xc0007dca00) (1) Data frame handling\nI0907 07:53:16.338078 775 log.go:181] (0xc0007dca00) (1) Data frame sent\nI0907 07:53:16.338112 775 log.go:181] (0xc0007e7340) (0xc0007dca00) Stream removed, broadcasting: 1\nI0907 07:53:16.338337 775 log.go:181] (0xc0007e7340) Go away received\nI0907 07:53:16.338512 775 log.go:181] (0xc0007e7340) (0xc0007dca00) Stream removed, broadcasting: 1\nI0907 07:53:16.338531 775 log.go:181] (0xc0007e7340) (0xc0007dc000) Stream removed, broadcasting: 3\nI0907 07:53:16.338539 775 log.go:181] (0xc0007e7340) (0xc000a2bea0) Stream removed, broadcasting: 5\n" Sep 7 07:53:16.342: INFO: stdout: "affinity-nodeport-timeout-vf7qf" Sep 7 07:53:16.342: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-2865, will wait for the garbage collector to delete the pods Sep 7 07:53:16.425: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 11.965373ms Sep 7 07:53:17.026: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 600.241197ms [AfterEach] [sig-network] Services 
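The affinity test that just passed boils down to two shell idioms driven through `kubectl exec`: an `nc` reachability probe per node IP, and a 16-iteration `curl` loop whose output must name a single backend pod (here `affinity-nodeport-timeout-8xvx5`, until the timeout elapses and `affinity-nodeport-timeout-vf7qf` appears). A minimal sketch of the verification logic; the namespace, exec pod, and endpoint come from this log, while `all_same` is a hypothetical helper, not part of the suite:

```shell
#!/bin/sh
# all_same: read backend pod names on stdin; succeed only if every
# line matches the first one, i.e. session affinity held.
all_same() {
  awk 'NR == 1 { first = $0 } $0 != first { exit 1 }'
}

# Against a live cluster the suite effectively runs:
#   kubectl exec -n services-2865 execpod-affinitytcfm2 -- /bin/sh -x -c \
#     'nc -zv -t -w 2 172.18.0.15 32728'                # port reachable?
#   kubectl exec -n services-2865 execpod-affinitytcfm2 -- /bin/sh -x -c \
#     'for i in $(seq 0 15); do curl -q -s --connect-timeout 2 \
#        http://172.18.0.15:32728/; echo; done'          # same pod 16 times?
# and pipes the returned pod names through a check like all_same.

printf 'affinity-nodeport-timeout-8xvx5\naffinity-nodeport-timeout-8xvx5\n' \
  | all_same && echo "affinity held"
printf 'pod-a\npod-b\n' | all_same || echo "affinity broken"
```

Note that an empty input also passes `all_same`; the real suite additionally fails if stdout is empty, as the `stdout: ""` lines above show it checking.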
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:53:32.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2865" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:71.457 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":47,"skipped":607,"failed":0} SSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:53:32.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token Sep 7 07:53:33.090: INFO: created pod pod-service-account-defaultsa Sep 7 07:53:33.090: INFO: pod pod-service-account-defaultsa service account token volume mount: true Sep 7 07:53:33.094: INFO: created pod pod-service-account-mountsa Sep 7 07:53:33.094: INFO: pod pod-service-account-mountsa service account token volume mount: true Sep 7 07:53:33.130: INFO: created pod pod-service-account-nomountsa Sep 7 07:53:33.130: INFO: pod pod-service-account-nomountsa service account token volume mount: false Sep 7 07:53:33.152: INFO: created pod pod-service-account-defaultsa-mountspec Sep 7 07:53:33.152: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Sep 7 07:53:33.214: INFO: created pod pod-service-account-mountsa-mountspec Sep 7 07:53:33.214: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Sep 7 07:53:33.248: INFO: created pod pod-service-account-nomountsa-mountspec Sep 7 07:53:33.248: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Sep 7 07:53:33.294: INFO: created pod pod-service-account-defaultsa-nomountspec Sep 7 07:53:33.294: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Sep 7 07:53:33.339: INFO: created pod pod-service-account-mountsa-nomountspec Sep 7 07:53:33.339: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Sep 7 07:53:33.377: INFO: created pod pod-service-account-nomountsa-nomountspec Sep 7 07:53:33.377: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:53:33.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-591" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":303,"completed":48,"skipped":612,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:53:33.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 7 07:53:33.561: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 7 07:53:33.597: INFO: Waiting for terminating namespaces to be deleted... 
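The nine pods created above enumerate the combinations of ServiceAccount-level and pod-level automount settings, and the logged `token volume mount:` values confirm that the pod-level field wins whenever both are set (e.g. `nomountsa-mountspec` still mounts the token). A sketch of the two places `automountServiceAccountToken` can be set; the names here are illustrative, not taken from the suite:

```yaml
# ServiceAccount-level default: pods using this SA get no token volume
# unless their own spec overrides it.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false
---
# Pod-level setting: takes precedence over the ServiceAccount's value.
apiVersion: v1
kind: Pod
metadata:
  name: pod-mount-override
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: true   # token IS mounted despite the SA default
  containers:
  - name: token-test
    image: busybox
```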
Sep 7 07:53:33.600: INFO: Logging pods the apiserver thinks are on node latest-worker before test Sep 7 07:53:33.604: INFO: kindnet-d72xf from kube-system started at 2020-09-06 13:49:16 +0000 UTC (1 container statuses recorded) Sep 7 07:53:33.604: INFO: Container kindnet-cni ready: true, restart count 0 Sep 7 07:53:33.604: INFO: kube-proxy-64mm6 from kube-system started at 2020-09-06 13:49:14 +0000 UTC (1 container statuses recorded) Sep 7 07:53:33.604: INFO: Container kube-proxy ready: true, restart count 0 Sep 7 07:53:33.604: INFO: pod-service-account-defaultsa from svcaccounts-591 started at 2020-09-07 07:53:33 +0000 UTC (1 container statuses recorded) Sep 7 07:53:33.604: INFO: Container token-test ready: false, restart count 0 Sep 7 07:53:33.604: INFO: pod-service-account-mountsa-nomountspec from svcaccounts-591 started at 2020-09-07 07:53:33 +0000 UTC (1 container statuses recorded) Sep 7 07:53:33.604: INFO: Container token-test ready: false, restart count 0 Sep 7 07:53:33.604: INFO: pod-service-account-nomountsa from svcaccounts-591 started at 2020-09-07 07:53:33 +0000 UTC (1 container statuses recorded) Sep 7 07:53:33.604: INFO: Container token-test ready: false, restart count 0 Sep 7 07:53:33.604: INFO: pod-service-account-nomountsa-mountspec from svcaccounts-591 started at 2020-09-07 07:53:33 +0000 UTC (1 container statuses recorded) Sep 7 07:53:33.604: INFO: Container token-test ready: false, restart count 0 Sep 7 07:53:33.604: INFO: pod-service-account-nomountsa-nomountspec from svcaccounts-591 started at (0 container statuses recorded) Sep 7 07:53:33.604: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Sep 7 07:53:33.608: INFO: kindnet-dktmm from kube-system started at 2020-09-06 13:49:16 +0000 UTC (1 container statuses recorded) Sep 7 07:53:33.608: INFO: Container kindnet-cni ready: true, restart count 0 Sep 7 07:53:33.608: INFO: kube-proxy-b55gf from kube-system started at 2020-09-06 13:49:14 +0000 UTC (1 container
statuses recorded) Sep 7 07:53:33.608: INFO: Container kube-proxy ready: true, restart count 0 Sep 7 07:53:33.608: INFO: pod-service-account-defaultsa-mountspec from svcaccounts-591 started at 2020-09-07 07:53:33 +0000 UTC (1 container statuses recorded) Sep 7 07:53:33.608: INFO: Container token-test ready: false, restart count 0 Sep 7 07:53:33.608: INFO: pod-service-account-defaultsa-nomountspec from svcaccounts-591 started at 2020-09-07 07:53:33 +0000 UTC (1 container statuses recorded) Sep 7 07:53:33.608: INFO: Container token-test ready: false, restart count 0 Sep 7 07:53:33.608: INFO: pod-service-account-mountsa from svcaccounts-591 started at 2020-09-07 07:53:33 +0000 UTC (1 container statuses recorded) Sep 7 07:53:33.609: INFO: Container token-test ready: false, restart count 0 Sep 7 07:53:33.609: INFO: pod-service-account-mountsa-mountspec from svcaccounts-591 started at 2020-09-07 07:53:33 +0000 UTC (1 container statuses recorded) Sep 7 07:53:33.609: INFO: Container token-test ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16327123313f2ea6], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:53:34.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6231" for this suite. 
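The `FailedScheduling` event above ("0/3 nodes are available: 3 node(s) didn't match node selector.") follows from the NodeSelector predicate: a node is feasible only if its labels contain every key/value pair in the pod's `spec.nodeSelector`. A sketch of that subset check, with made-up node labels and a deliberately non-matching selector:

```python
def node_selector_matches(node_labels: dict, node_selector: dict) -> bool:
    # Every selector key/value pair must appear verbatim in the node labels.
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# Hypothetical three-node cluster like the kind cluster in this run.
nodes = {
    "latest-control-plane": {"kubernetes.io/os": "linux"},
    "latest-worker": {"kubernetes.io/os": "linux"},
    "latest-worker2": {"kubernetes.io/os": "linux"},
}

# The test pod carries a nonempty selector that no node satisfies.
selector = {"e2e-test-label": "no-such-value"}
feasible = [n for n, labels in nodes.items()
            if node_selector_matches(labels, selector)]
print(f"{len(feasible)}/{len(nodes)} nodes are available")
```

With an empty feasible set, the scheduler can only emit the warning event and leave `restricted-pod` Pending, which is exactly what the test asserts.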
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":303,"completed":49,"skipped":614,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:53:34.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Sep 7 07:53:46.796: INFO: Successfully updated pod "annotationupdatede02b9cc-b832-4513-ab7d-256772128b8c" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:53:48.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "downward-api-5721" for this suite. • [SLOW TEST:14.199 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":50,"skipped":644,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:53:48.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 7 07:53:49.570: INFO: deployment "sample-webhook-deployment" doesn't have the required 
revision set Sep 7 07:53:51.581: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735062029, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735062029, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735062029, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735062029, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 7 07:53:54.618: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:53:55.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8838" for 
this suite. STEP: Destroying namespace "webhook-8838-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.371 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":303,"completed":51,"skipped":676,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:53:56.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 7 07:53:56.313: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b0b2b253-29f3-4b66-8171-e9caf886ac54" in namespace "downward-api-8302" to be "Succeeded or Failed" Sep 7 07:53:56.317: INFO: Pod "downwardapi-volume-b0b2b253-29f3-4b66-8171-e9caf886ac54": Phase="Pending", Reason="", readiness=false. Elapsed: 4.280931ms Sep 7 07:53:58.321: INFO: Pod "downwardapi-volume-b0b2b253-29f3-4b66-8171-e9caf886ac54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008422315s Sep 7 07:54:00.326: INFO: Pod "downwardapi-volume-b0b2b253-29f3-4b66-8171-e9caf886ac54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013358773s STEP: Saw pod success Sep 7 07:54:00.326: INFO: Pod "downwardapi-volume-b0b2b253-29f3-4b66-8171-e9caf886ac54" satisfied condition "Succeeded or Failed" Sep 7 07:54:00.329: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-b0b2b253-29f3-4b66-8171-e9caf886ac54 container client-container: STEP: delete the pod Sep 7 07:54:00.360: INFO: Waiting for pod downwardapi-volume-b0b2b253-29f3-4b66-8171-e9caf886ac54 to disappear Sep 7 07:54:00.383: INFO: Pod downwardapi-volume-b0b2b253-29f3-4b66-8171-e9caf886ac54 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:54:00.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8302" for this suite. 
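The `downwardapi-volume` pod above projects `resources.limits.memory` into a file through a `resourceFieldRef`. The projected value is the resource quantity divided by the field's `divisor` (default `1`), rounded up to a whole number. A sketch of that conversion, with example quantities chosen for illustration:

```python
import math

def downward_api_value(quantity_bytes: int, divisor_bytes: int = 1) -> int:
    # The downward API exposes quantity / divisor, rounded up.
    return math.ceil(quantity_bytes / divisor_bytes)

MI = 1024 * 1024
# e.g. limits.memory of 64Mi with divisor 1Mi projects the string "64"
print(downward_api_value(64 * MI, MI))
# with the default divisor of 1, the raw byte count is projected
print(downward_api_value(64 * MI))
```

The test's client container simply prints the projected file and the framework compares it against the expected number.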
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":52,"skipped":702,"failed":0} SSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:54:00.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 07:56:00.484: INFO: Deleting pod "var-expansion-8179917b-b5bd-49f2-85ca-b0d7d89e7c54" in namespace "var-expansion-6108" Sep 7 07:56:00.488: INFO: Wait up to 5m0s for pod "var-expansion-8179917b-b5bd-49f2-85ca-b0d7d89e7c54" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:56:04.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6108" for this suite. 
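The var-expansion test above expects failure by design: only `$(VAR_NAME)` references are substituted in a volume mount's `subPathExpr`, so a value containing shell backticks passes through literally and the resulting subpath is rejected, leaving the pod stuck until the test deletes it. A sketch of that expansion rule (the environment mapping is illustrative, and the variable-name pattern is an approximation of the real one):

```python
import re

# Only $(VAR_NAME) references are substituted; anything else, including
# backticks, is left untouched. Unknown variables also stay literal.
_VAR = re.compile(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)")

def expand_subpath(expr: str, env: dict) -> str:
    return _VAR.sub(lambda m: env.get(m.group(1), m.group(0)), expr)

env = {"POD_NAME": "var-expansion-8179917b"}
print(expand_subpath("$(POD_NAME)/data", env))   # substituted
print(expand_subpath("`hostname`/data", env))    # left literal -> invalid subpath
```

Because no shell is involved, backticks never execute anything; they just produce a path the kubelet refuses to mount.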
• [SLOW TEST:124.198 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":303,"completed":53,"skipped":708,"failed":0} S ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:56:04.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-4644 STEP: Waiting for pods to come up. 
STEP: Creating tester pod tester in namespace prestop-4644 STEP: Deleting pre-stop pod Sep 7 07:56:17.736: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:56:17.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-4644" for this suite. • [SLOW TEST:13.200 seconds] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":303,"completed":54,"skipped":709,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources 
[Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:56:17.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 07:56:17.871: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:56:19.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9319" for this suite. 
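The CustomResourceDefinition test above only creates and then deletes a CRD object. For reference, this is the shape of a minimal v1 CRD body such a test registers, built as a plain dict (the group, kind, and schema here are made-up examples, not the test's actual values):

```python
group = "mygroup.example.com"  # hypothetical API group
plural, kind = "noxus", "Noxu"

crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    # A CRD's metadata.name must be "<plural>.<group>".
    "metadata": {"name": f"{plural}.{group}"},
    "spec": {
        "group": group,
        "scope": "Namespaced",
        "names": {"plural": plural, "singular": "noxu", "kind": kind},
        "versions": [{
            "name": "v1",
            "served": True,
            "storage": True,  # exactly one version must be the storage version
            "schema": {"openAPIV3Schema": {
                "type": "object",
                "x-kubernetes-preserve-unknown-fields": True,
            }},
        }],
    },
}
print(crd["metadata"]["name"])
```

Deleting the CRD removes the served API group/version along with any remaining custom objects, which is the behavior this conformance test verifies.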
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":303,"completed":55,"skipped":725,"failed":0} SSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:56:19.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:56:24.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7712" for this suite. 
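The adoption step above ("Then the orphan pod is adopted") hinges on two conditions: the pod's labels match the controller's selector, and the pod has no existing controller ownerReference. A sketch of that check (pod objects are illustrative dicts, not real API reads):

```python
def is_adoptable(pod: dict, selector: dict) -> bool:
    # A controller adopts a pod only if the selector matches its labels
    # and no other controller already owns it.
    already_owned = any(ref.get("controller")
                        for ref in pod.get("ownerReferences", []))
    matches = all(pod.get("labels", {}).get(k) == v
                  for k, v in selector.items())
    return matches and not already_owned

orphan = {"name": "pod-adoption", "labels": {"name": "pod-adoption"}}
owned = {"name": "other", "labels": {"name": "pod-adoption"},
         "ownerReferences": [{"controller": True}]}

print(is_adoptable(orphan, {"name": "pod-adoption"}))  # adopted
print(is_adoptable(owned, {"name": "pod-adoption"}))   # left alone
```

After adoption, the ReplicationController counts the formerly orphaned pod toward its replica total instead of creating a new one.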
• [SLOW TEST:5.241 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":303,"completed":56,"skipped":729,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:56:24.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0907 07:56:34.557592 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Sep 7 07:57:36.575: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:57:36.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2745" for this suite. • [SLOW TEST:72.208 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":303,"completed":57,"skipped":730,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:57:36.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-p2bp STEP: Creating a pod to test atomic-volume-subpath Sep 7 07:57:36.712: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-p2bp" in namespace "subpath-900" to be "Succeeded or Failed" Sep 7 07:57:36.757: INFO: Pod "pod-subpath-test-configmap-p2bp": Phase="Pending", Reason="", readiness=false. Elapsed: 45.342396ms Sep 7 07:57:38.762: INFO: Pod "pod-subpath-test-configmap-p2bp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050125546s Sep 7 07:57:40.767: INFO: Pod "pod-subpath-test-configmap-p2bp": Phase="Running", Reason="", readiness=true. Elapsed: 4.054440994s Sep 7 07:57:42.772: INFO: Pod "pod-subpath-test-configmap-p2bp": Phase="Running", Reason="", readiness=true. Elapsed: 6.059447548s Sep 7 07:57:44.776: INFO: Pod "pod-subpath-test-configmap-p2bp": Phase="Running", Reason="", readiness=true. Elapsed: 8.064140532s Sep 7 07:57:46.780: INFO: Pod "pod-subpath-test-configmap-p2bp": Phase="Running", Reason="", readiness=true. Elapsed: 10.068316538s Sep 7 07:57:48.785: INFO: Pod "pod-subpath-test-configmap-p2bp": Phase="Running", Reason="", readiness=true. Elapsed: 12.072619621s Sep 7 07:57:50.789: INFO: Pod "pod-subpath-test-configmap-p2bp": Phase="Running", Reason="", readiness=true. Elapsed: 14.077317821s Sep 7 07:57:52.794: INFO: Pod "pod-subpath-test-configmap-p2bp": Phase="Running", Reason="", readiness=true. Elapsed: 16.082054024s Sep 7 07:57:54.799: INFO: Pod "pod-subpath-test-configmap-p2bp": Phase="Running", Reason="", readiness=true. Elapsed: 18.086632504s Sep 7 07:57:56.805: INFO: Pod "pod-subpath-test-configmap-p2bp": Phase="Running", Reason="", readiness=true. Elapsed: 20.092700483s Sep 7 07:57:58.810: INFO: Pod "pod-subpath-test-configmap-p2bp": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.097741502s Sep 7 07:58:00.843: INFO: Pod "pod-subpath-test-configmap-p2bp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.130628106s STEP: Saw pod success Sep 7 07:58:00.843: INFO: Pod "pod-subpath-test-configmap-p2bp" satisfied condition "Succeeded or Failed" Sep 7 07:58:00.871: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-p2bp container test-container-subpath-configmap-p2bp: STEP: delete the pod Sep 7 07:58:00.912: INFO: Waiting for pod pod-subpath-test-configmap-p2bp to disappear Sep 7 07:58:00.928: INFO: Pod pod-subpath-test-configmap-p2bp no longer exists STEP: Deleting pod pod-subpath-test-configmap-p2bp Sep 7 07:58:00.928: INFO: Deleting pod "pod-subpath-test-configmap-p2bp" in namespace "subpath-900" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:58:00.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-900" for this suite. 
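The long run of "Elapsed: ..." lines above is the framework's poll-until-terminal-phase loop: re-read the pod phase every couple of seconds until it reaches Succeeded or Failed, or the timeout expires. A condensed sketch of that pattern (the fake phase sequence stands in for real API reads):

```python
import time

def wait_for_phase(get_phase, interval=2.0, timeout=300.0,
                   clock=time.monotonic, sleep=time.sleep):
    # Poll until a terminal phase or timeout, mirroring the framework's
    # "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" loop.
    deadline = clock() + timeout
    while clock() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")

# Fake reads: Pending twice, then Succeeded, as for
# pod-subpath-test-configmap-p2bp above (Running steps elided).
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_phase(lambda: next(phases), sleep=lambda _: None))
```

Injecting `clock` and `sleep` keeps the loop testable without real delays, which is also why the sample passes a no-op `sleep`.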
• [SLOW TEST:24.354 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":303,"completed":58,"skipped":752,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:58:00.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Sep 7 07:58:09.312: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 7 07:58:09.315: INFO: Pod pod-with-poststart-exec-hook still exists Sep 7 07:58:11.316: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 7 07:58:11.320: INFO: Pod pod-with-poststart-exec-hook still exists Sep 7 07:58:13.316: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 7 07:58:13.345: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:58:13.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3222" for this suite. 
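The lifecycle-hook pod above attaches a PostStart exec hook: the kubelet runs the hook's command inside the container right after it starts, and the test then polls (the "Waiting for pod ... to disappear" lines) until deletion completes. A minimal pod manifest of that shape as a dict (the image and commands are placeholders, not the test's actual values):

```python
pod_with_hook = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-poststart-exec-hook"},
    "spec": {
        "containers": [{
            "name": "pod-with-poststart-exec-hook",
            "image": "example.com/busybox:1.29",  # placeholder image
            "command": ["sh", "-c", "sleep 600"],
            "lifecycle": {
                # Executed in the container immediately after it starts;
                # a failing hook causes the container to be killed.
                "postStart": {"exec": {"command": ["sh", "-c", "echo poststart"]}},
            },
        }],
    },
}
hook = pod_with_hook["spec"]["containers"][0]["lifecycle"]["postStart"]
print("exec" in hook)
```

The test's handle pod (the HTTPGet handler created in BeforeEach) records the hook firing, which is what "check poststart hook" verifies.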
• [SLOW TEST:12.415 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":303,"completed":59,"skipped":800,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:58:13.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name 
projected-secret-test-map-25f55cfd-946f-4d18-bb29-1945af580b98 STEP: Creating a pod to test consume secrets Sep 7 07:58:13.413: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-254b32e4-de77-4166-9140-d274f2e31dab" in namespace "projected-1147" to be "Succeeded or Failed" Sep 7 07:58:13.426: INFO: Pod "pod-projected-secrets-254b32e4-de77-4166-9140-d274f2e31dab": Phase="Pending", Reason="", readiness=false. Elapsed: 13.074123ms Sep 7 07:58:15.430: INFO: Pod "pod-projected-secrets-254b32e4-de77-4166-9140-d274f2e31dab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017016044s Sep 7 07:58:17.459: INFO: Pod "pod-projected-secrets-254b32e4-de77-4166-9140-d274f2e31dab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045298411s STEP: Saw pod success Sep 7 07:58:17.459: INFO: Pod "pod-projected-secrets-254b32e4-de77-4166-9140-d274f2e31dab" satisfied condition "Succeeded or Failed" Sep 7 07:58:17.462: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-254b32e4-de77-4166-9140-d274f2e31dab container projected-secret-volume-test: STEP: delete the pod Sep 7 07:58:17.511: INFO: Waiting for pod pod-projected-secrets-254b32e4-de77-4166-9140-d274f2e31dab to disappear Sep 7 07:58:17.527: INFO: Pod pod-projected-secrets-254b32e4-de77-4166-9140-d274f2e31dab no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:58:17.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1147" for this suite. 
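The test above projects a Secret into a volume with a per-item path mapping and an explicit item mode. A hypothetical manifest for such a pod, written as a Python dict (names and the busybox command are illustrative; the field layout follows the `projected` volume API, where `secret.items[].mode` sets the per-file permission bits):

```python
# Sketch of a pod consuming a projected Secret with a mapped item and mode.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-projected-secrets-example"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "projected-secret-volume-test",
            "image": "busybox",
            # Read the remapped file to verify content and mode.
            "command": ["cat", "/etc/projected-secret-volume/new-path-data-1"],
            "volumeMounts": [{
                "name": "projected-secret-volume",
                "mountPath": "/etc/projected-secret-volume",
                "readOnly": True,
            }],
        }],
        "volumes": [{
            "name": "projected-secret-volume",
            "projected": {"sources": [{
                "secret": {
                    "name": "projected-secret-test-map-example",
                    "items": [{
                        "key": "data-1",
                        "path": "new-path-data-1",  # mapped path, not the key name
                        "mode": 0o400,              # per-item mode the test exercises
                    }],
                },
            }]},
        }],
    },
}
```

With `restartPolicy: Never`, a one-shot container like this terminates in the `Succeeded` phase, which is why the framework waits on the "Succeeded or Failed" condition before pulling logs.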
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":60,"skipped":818,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:58:17.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-5cad7566-da8f-4d10-8595-b769db2541eb STEP: Creating secret with name secret-projected-all-test-volume-4df8423d-1f2b-4d90-b8ae-bab9273de828 STEP: Creating a pod to test Check all projections for projected volume plugin Sep 7 07:58:17.656: INFO: Waiting up to 5m0s for pod "projected-volume-8f2a059c-5ed8-4423-81b0-51fd02c2b6f9" in namespace "projected-4550" to be "Succeeded or Failed" Sep 7 07:58:17.672: INFO: Pod "projected-volume-8f2a059c-5ed8-4423-81b0-51fd02c2b6f9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.127601ms Sep 7 07:58:19.675: INFO: Pod "projected-volume-8f2a059c-5ed8-4423-81b0-51fd02c2b6f9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.01829758s Sep 7 07:58:21.679: INFO: Pod "projected-volume-8f2a059c-5ed8-4423-81b0-51fd02c2b6f9": Phase="Running", Reason="", readiness=true. Elapsed: 4.02269088s Sep 7 07:58:23.686: INFO: Pod "projected-volume-8f2a059c-5ed8-4423-81b0-51fd02c2b6f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02913209s STEP: Saw pod success Sep 7 07:58:23.686: INFO: Pod "projected-volume-8f2a059c-5ed8-4423-81b0-51fd02c2b6f9" satisfied condition "Succeeded or Failed" Sep 7 07:58:23.688: INFO: Trying to get logs from node latest-worker2 pod projected-volume-8f2a059c-5ed8-4423-81b0-51fd02c2b6f9 container projected-all-volume-test: STEP: delete the pod Sep 7 07:58:23.730: INFO: Waiting for pod projected-volume-8f2a059c-5ed8-4423-81b0-51fd02c2b6f9 to disappear Sep 7 07:58:23.747: INFO: Pod projected-volume-8f2a059c-5ed8-4423-81b0-51fd02c2b6f9 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:58:23.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4550" for this suite. 
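The `{"msg":"PASSED ...","total":303,...}` records interleaved through this log are machine-readable progress markers. A small stdlib sketch of consuming one (the record text is abbreviated from the log; `skipped` counts specs outside the 303 selected to run, so progress is `completed/total`):

```python
import json

# One progress record, as emitted after each spec finishes.
line = ('{"msg":"PASSED [sig-storage] Projected combined should project all '
        'components that make up the projection API",'
        '"total":303,"completed":61,"skipped":832,"failed":0}')

rec = json.loads(line)
remaining = rec["total"] - rec["completed"]          # specs still to run
percent_done = 100.0 * rec["completed"] / rec["total"]
status = "FAILING" if rec["failed"] else "passing"
# e.g. 61/303 specs done (~20.1%), suite passing so far
```

Tailing a run and parsing these lines is a simple way to track a long conformance suite without scraping the human-readable output.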
• [SLOW TEST:6.215 seconds] [sig-storage] Projected combined /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":303,"completed":61,"skipped":832,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:58:23.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:58:23.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6805" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":303,"completed":62,"skipped":837,"failed":0} SSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:58:23.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Sep 7 
07:58:28.050: INFO: &Pod{ObjectMeta:{send-events-c36588e5-445b-4eee-a0f2-86cf9a82c9b9 events-7807 /api/v1/namespaces/events-7807/pods/send-events-c36588e5-445b-4eee-a0f2-86cf9a82c9b9 8a0bb70e-94d6-4a53-a903-68169fdfe906 276210 0 2020-09-07 07:58:24 +0000 UTC map[name:foo time:14825198] map[] [] [] [{e2e.test Update v1 2020-09-07 07:58:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-07 07:58:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.31\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hhhg8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hhhg8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO
:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hhhg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},Topol
ogySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 07:58:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 07:58:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 07:58:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 07:58:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.2.31,StartTime:2020-09-07 07:58:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-07 07:58:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://82f7db3233a935999c616d8f2dedd023e7c9f5c6d97a004f7c20aad471fca7d7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.31,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Sep 7 07:58:30.077: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Sep 7 07:58:32.081: INFO: Saw kubelet event for our pod. 
STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:58:32.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7807" for this suite. • [SLOW TEST:8.220 seconds] [k8s.io] [sig-node] Events /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":303,"completed":63,"skipped":843,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:58:32.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 07:58:45.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6583" for this suite. • [SLOW TEST:13.228 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":303,"completed":64,"skipped":854,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 07:58:45.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-9a3ff41a-d64f-4652-8004-9d3fe702c5f1 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-9a3ff41a-d64f-4652-8004-9d3fe702c5f1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:00:17.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5531" for this suite. 
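The ConfigMap update test above legitimately takes ~90 seconds: kubelet propagates ConfigMap changes to projected volumes on its periodic sync, so the test must poll the mounted file until the new value appears. A minimal sketch of that eventual-consistency wait (the `read_volume_file` callback and values are illustrative, not the framework's code):

```python
import time

def wait_for_volume_update(read_volume_file, expected, timeout=120.0,
                           interval=2.0, sleep=time.sleep):
    """Poll the file projected from the ConfigMap until the updated content
    is visible, as in the 'waiting to observe update in volume' step."""
    waited = 0.0
    while read_volume_file() != expected:
        if waited >= timeout:
            raise TimeoutError("update not observed within %.0fs" % timeout)
        sleep(interval)
        waited += interval
    return waited

# Simulated mount where the kubelet sync lands on the third read.
reads = iter(["value-1", "value-1", "value-2"])
waited = wait_for_volume_update(lambda: next(reads), "value-2",
                                sleep=lambda s: None)
```

The key point the test encodes: updates are observed *eventually*, not atomically with the API write, so any consumer of projected ConfigMaps needs this kind of tolerance.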
• [SLOW TEST:92.556 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":65,"skipped":859,"failed":0} SSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:00:17.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 08:00:18.004: INFO: Waiting up to 5m0s for pod "busybox-user-65534-dfbe3234-3648-421d-b0a9-20d25047958f" in namespace "security-context-test-6164" to be "Succeeded or 
Failed" Sep 7 08:00:18.019: INFO: Pod "busybox-user-65534-dfbe3234-3648-421d-b0a9-20d25047958f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.410052ms Sep 7 08:00:20.234: INFO: Pod "busybox-user-65534-dfbe3234-3648-421d-b0a9-20d25047958f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229366343s Sep 7 08:00:22.239: INFO: Pod "busybox-user-65534-dfbe3234-3648-421d-b0a9-20d25047958f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.234178241s Sep 7 08:00:22.239: INFO: Pod "busybox-user-65534-dfbe3234-3648-421d-b0a9-20d25047958f" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:00:22.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6164" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":66,"skipped":863,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:00:22.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] 
SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Sep 7 08:00:22.396: INFO: Waiting up to 1m0s for all nodes to be ready Sep 7 08:01:22.414: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Sep 7 08:01:22.429: INFO: Created pod: pod0-sched-preemption-low-priority Sep 7 08:01:22.502: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:01:36.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-5724" for this suite. 
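The scenario above fills 2/3 of node resources with a low- and a medium-priority pod, then schedules a critical pod sized like the low-priority one; the scheduler must evict the lowest-priority victim to make room. A simplified sketch of that victim selection (illustrative only; the real scheduler's preemption algorithm considers PodDisruptionBudgets, affinity, and more):

```python
def pick_preemption_victims(pods, needed_cpu):
    """Evict lowest-priority pods first until enough CPU is freed
    for the incoming higher-priority pod."""
    victims = []
    freed = 0
    for pod in sorted(pods, key=lambda p: p["priority"]):
        if freed >= needed_cpu:
            break
        victims.append(pod["name"])
        freed += pod["cpu"]
    return victims

# Mirrors the test's setup: two running pods, critical pod needs 2 CPU.
pods = [
    {"name": "pod0-sched-preemption-low-priority", "priority": 1, "cpu": 2},
    {"name": "pod1-sched-preemption-medium-priority", "priority": 5, "cpu": 2},
]
victims = pick_preemption_victims(pods, needed_cpu=2)
# Only the low-priority pod is displaced; the medium-priority pod survives.
```

This is the property the conformance test asserts: the critical pod lands by preempting the *lowest*-priority pod that frees sufficient resources, leaving higher-priority workloads untouched.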
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:74.505 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":303,"completed":67,"skipped":906,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:01:36.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Sep 7 08:01:45.196: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 7 08:01:45.234: INFO: Pod pod-with-prestop-http-hook still exists Sep 7 08:01:47.234: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 7 08:01:47.239: INFO: Pod pod-with-prestop-http-hook still exists Sep 7 08:01:49.234: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 7 08:01:49.238: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:01:49.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-548" for this suite. 
• [SLOW TEST:12.508 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":303,"completed":68,"skipped":909,"failed":0} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:01:49.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Sep 7 08:01:49.387: INFO: Waiting up to 1m0s for all nodes to be ready Sep 7 08:02:49.409: INFO: Waiting for terminating namespaces to be deleted... 
[It] validates basic preemption works [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Sep 7 08:02:49.448: INFO: Created pod: pod0-sched-preemption-low-priority Sep 7 08:02:49.480: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:03:03.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-4758" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:74.438 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":303,"completed":69,"skipped":918,"failed":0} [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] 
[k8s.io] Kubelet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:03:03.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:03:07.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8674" for this suite. 
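[Annotation] The Kubelet test above ("should print the output to logs") creates a pod whose container command writes a line to stdout, then verifies that line through the kubelet logs endpoint. A minimal sketch of a pod with that shape — the name, image tag, and echoed text here are illustrative, not the e2e test's actual spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-demo        # illustrative name, not the test's generated pod name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    # Write a known string to stdout so it lands in the container log
    command: ["/bin/sh", "-c", "echo 'Hello from busybox'"]
```

With such a pod, `kubectl logs busybox-logs-demo` would return the echoed line, which is essentially what the test asserts.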
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":303,"completed":70,"skipped":918,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:03:07.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should find a service from listing all namespaces [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:03:07.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1108" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":303,"completed":71,"skipped":941,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:03:07.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info Sep 7 08:03:08.011: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config cluster-info' Sep 7 08:03:12.488: INFO: stderr: "" Sep 7 08:03:12.488: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:43335\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:43335/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug 
and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:03:12.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3113" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":303,"completed":72,"skipped":947,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:03:12.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-adb7fb0c-95f8-4c58-bb67-55217156db2c STEP: Creating a pod to test consume configMaps Sep 7 08:03:12.642: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-67b53eba-77fb-4de9-84a6-0ecef30b4722" in namespace "projected-534" to be "Succeeded or Failed" Sep 7 08:03:12.645: INFO: Pod "pod-projected-configmaps-67b53eba-77fb-4de9-84a6-0ecef30b4722": Phase="Pending", Reason="", 
readiness=false. Elapsed: 2.820174ms Sep 7 08:03:14.648: INFO: Pod "pod-projected-configmaps-67b53eba-77fb-4de9-84a6-0ecef30b4722": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006136401s Sep 7 08:03:16.657: INFO: Pod "pod-projected-configmaps-67b53eba-77fb-4de9-84a6-0ecef30b4722": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014802499s STEP: Saw pod success Sep 7 08:03:16.657: INFO: Pod "pod-projected-configmaps-67b53eba-77fb-4de9-84a6-0ecef30b4722" satisfied condition "Succeeded or Failed" Sep 7 08:03:16.660: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-67b53eba-77fb-4de9-84a6-0ecef30b4722 container projected-configmap-volume-test: STEP: delete the pod Sep 7 08:03:16.693: INFO: Waiting for pod pod-projected-configmaps-67b53eba-77fb-4de9-84a6-0ecef30b4722 to disappear Sep 7 08:03:16.707: INFO: Pod pod-projected-configmaps-67b53eba-77fb-4de9-84a6-0ecef30b4722 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:03:16.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-534" for this suite. 
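[Annotation] The Projected configMap test above creates a ConfigMap, mounts one of its keys into a pod through a `projected` volume, and checks the file content read inside the container. A minimal sketch of that arrangement — the names and data values are illustrative, not the test's generated ones:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config              # illustrative; the test generates a unique name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo           # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    # Reading the mounted key back is what the test's "Succeeded or Failed" wait checks
    command: ["cat", "/etc/projected/data-1"]
    volumeMounts:
    - name: config-vol
      mountPath: /etc/projected
  volumes:
  - name: config-vol
    projected:
      sources:
      - configMap:
          name: demo-config
```

The same ConfigMap data could also be mounted with a plain `configMap` volume; the `projected` type exists to combine several sources (configMaps, secrets, downward API) under one mount point.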
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":73,"skipped":959,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:03:16.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 08:03:16.783: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Sep 7 08:03:18.828: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:03:19.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3931" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":303,"completed":74,"skipped":974,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:03:20.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl logs /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1415 STEP: creating an pod Sep 7 08:03:20.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.20 --namespace=kubectl-3096 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Sep 7 08:03:20.787: INFO: stderr: "" Sep 7 08:03:20.787: INFO: stdout: "pod/logs-generator created\n" [It] should be able to 
retrieve and filter logs [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. Sep 7 08:03:20.788: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Sep 7 08:03:20.788: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3096" to be "running and ready, or succeeded" Sep 7 08:03:21.079: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 291.57681ms Sep 7 08:03:23.083: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.295574643s Sep 7 08:03:25.088: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.300580484s Sep 7 08:03:25.088: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Sep 7 08:03:25.088: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings Sep 7 08:03:25.088: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3096' Sep 7 08:03:25.217: INFO: stderr: "" Sep 7 08:03:25.217: INFO: stdout: "I0907 08:03:23.709709 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/mfgm 328\nI0907 08:03:23.909860 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/6w7h 326\nI0907 08:03:24.109937 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/k4b 255\nI0907 08:03:24.309885 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/j8p 459\nI0907 08:03:24.509886 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/pjr 526\nI0907 08:03:24.709921 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/nrf 593\nI0907 08:03:24.909903 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/ww5n 382\nI0907 08:03:25.109854 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/lv25 286\n" STEP: limiting log lines Sep 7 08:03:25.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3096 --tail=1' Sep 7 08:03:25.351: INFO: stderr: "" Sep 7 08:03:25.351: INFO: stdout: "I0907 08:03:25.309885 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/fcv 470\n" Sep 7 08:03:25.351: INFO: got output "I0907 08:03:25.309885 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/fcv 470\n" STEP: limiting log bytes Sep 7 08:03:25.351: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3096 --limit-bytes=1' Sep 7 08:03:25.664: INFO: stderr: "" Sep 7 08:03:25.664: INFO: stdout: "I" Sep 7 08:03:25.664: INFO: got output "I" STEP: exposing timestamps Sep 7 08:03:25.664: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3096 --tail=1 --timestamps' Sep 7 08:03:25.781: INFO: stderr: "" Sep 7 08:03:25.781: INFO: stdout: "2020-09-07T08:03:25.710077383Z I0907 08:03:25.709891 1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/46f 509\n" Sep 7 08:03:25.781: INFO: got output "2020-09-07T08:03:25.710077383Z I0907 08:03:25.709891 1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/46f 509\n" STEP: restricting to a time range Sep 7 08:03:28.281: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3096 --since=1s' Sep 7 08:03:28.384: INFO: stderr: "" Sep 7 08:03:28.384: INFO: stdout: "I0907 08:03:27.509911 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/ffn 413\nI0907 08:03:27.709925 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/d88 534\nI0907 08:03:27.909789 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/6bmz 342\nI0907 08:03:28.109911 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/2h6 259\nI0907 08:03:28.309905 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/pfj2 422\n" Sep 7 08:03:28.384: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3096 --since=24h' Sep 7 08:03:28.494: INFO: stderr: "" Sep 7 08:03:28.494: INFO: stdout: "I0907 08:03:23.709709 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/mfgm 328\nI0907 08:03:23.909860 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/6w7h 326\nI0907 08:03:24.109937 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/k4b 255\nI0907 08:03:24.309885 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/j8p 459\nI0907 08:03:24.509886 1 
logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/pjr 526\nI0907 08:03:24.709921 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/nrf 593\nI0907 08:03:24.909903 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/ww5n 382\nI0907 08:03:25.109854 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/lv25 286\nI0907 08:03:25.309885 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/fcv 470\nI0907 08:03:25.509896 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/5gd8 340\nI0907 08:03:25.709891 1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/46f 509\nI0907 08:03:25.909887 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/vsbb 444\nI0907 08:03:26.109864 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/vzb 478\nI0907 08:03:26.309898 1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/l9x 588\nI0907 08:03:26.509869 1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/p9n2 257\nI0907 08:03:26.709917 1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/vt5p 433\nI0907 08:03:26.909878 1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/lxb 468\nI0907 08:03:27.109946 1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/drt 212\nI0907 08:03:27.309806 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/5k5q 301\nI0907 08:03:27.509911 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/ffn 413\nI0907 08:03:27.709925 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/d88 534\nI0907 08:03:27.909789 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/6bmz 342\nI0907 08:03:28.109911 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/2h6 259\nI0907 08:03:28.309905 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/pfj2 422\n" [AfterEach] Kubectl logs 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1421 Sep 7 08:03:28.494: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-3096' Sep 7 08:03:30.642: INFO: stderr: "" Sep 7 08:03:30.642: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:03:30.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3096" for this suite. • [SLOW TEST:10.643 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1411 should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":303,"completed":75,"skipped":981,"failed":0} SSSSSSSS ------------------------------ [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] IngressClass API 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:03:30.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Sep 7 08:03:30.772: INFO: starting watch STEP: patching STEP: updating Sep 7 08:03:30.781: INFO: waiting for watch events with expected annotations Sep 7 08:03:30.781: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:03:30.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-2395" for this suite. 
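[Annotation] The "Kubectl logs" test earlier in this run exercised `kubectl logs` filtering flags (`--tail`, `--limit-bytes`, `--timestamps`, `--since`) against a logs-generator pod whose output follows a fixed format: a klog-style header, an iteration counter, an HTTP verb, a pods URL, and a latency value. A small parser sketch for that line format, assuming the layout shown in the captured stdout above (the regex and function name are this annotation's own, not part of the e2e framework):

```python
import re

# Shape of each logs-generator line as captured in the e2e output:
# I0907 08:03:23.709709       1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/mfgm 328
LINE_RE = re.compile(
    r"^[IWEF]\d{4} [\d:.]+\s+\d+ logs_generator\.go:\d+\] "
    r"(?P<iter>\d+) (?P<verb>[A-Z]+) "
    r"/api/v1/namespaces/(?P<ns>[\w-]+)/pods/(?P<pod>[\w-]+) (?P<latency>\d+)$"
)

def parse_line(line: str):
    """Return (iteration, verb, namespace, pod, latency), or None if the line doesn't match."""
    m = LINE_RE.match(line)
    if not m:
        return None
    return (int(m["iter"]), m["verb"], m["ns"], m["pod"], int(m["latency"]))

sample = ("I0907 08:03:23.709709       1 logs_generator.go:76] "
          "0 POST /api/v1/namespaces/default/pods/mfgm 328")
print(parse_line(sample))
```

Such a parser makes it easy to check, for example, that `--tail=1` returned exactly one well-formed record, which is the kind of assertion the test performs on the raw strings.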
•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":303,"completed":76,"skipped":989,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:03:30.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 08:03:30.892: INFO: Creating deployment "test-recreate-deployment" Sep 7 08:03:30.907: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Sep 7 08:03:30.920: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Sep 7 08:03:32.927: INFO: Waiting deployment "test-recreate-deployment" to complete Sep 7 08:03:32.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735062610, 
loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735062610, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735062611, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735062610, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 7 08:03:34.947: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Sep 7 08:03:34.956: INFO: Updating deployment test-recreate-deployment Sep 7 08:03:34.956: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 7 08:03:35.592: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-5698 /apis/apps/v1/namespaces/deployment-5698/deployments/test-recreate-deployment 7fe760eb-4f90-496a-b718-981904016c2a 277606 2 2020-09-07 08:03:30 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-09-07 08:03:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-07 08:03:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006265a68 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-09-07 08:03:35 +0000 UTC,LastTransitionTime:2020-09-07 08:03:35 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2020-09-07 08:03:35 +0000 UTC,LastTransitionTime:2020-09-07 08:03:30 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Sep 7 08:03:35.596: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-5698 /apis/apps/v1/namespaces/deployment-5698/replicasets/test-recreate-deployment-f79dd4667 d3c41b95-3ca7-4ddc-8ca1-007109bd2fdc 277605 1 2020-09-07 08:03:35 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 7fe760eb-4f90-496a-b718-981904016c2a 0xc0061fd600 0xc0061fd601}] [] [{kube-controller-manager Update apps/v1 2020-09-07 08:03:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7fe760eb-4f90-496a-b718-981904016c2a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0061fd678 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil 
default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 7 08:03:35.596: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Sep 7 08:03:35.596: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-c96cf48f deployment-5698 /apis/apps/v1/namespaces/deployment-5698/replicasets/test-recreate-deployment-c96cf48f cbc9b9c0-1d1b-458a-a8a8-e1be79104b9a 277594 2 2020-09-07 08:03:30 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 7fe760eb-4f90-496a-b718-981904016c2a 0xc0061fd50f 0xc0061fd520}] [] [{kube-controller-manager Update apps/v1 2020-09-07 08:03:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7fe760eb-4f90-496a-b718-981904016c2a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelecto
r{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: c96cf48f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0061fd598 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 7 08:03:35.622: INFO: Pod "test-recreate-deployment-f79dd4667-ckb28" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-ckb28 test-recreate-deployment-f79dd4667- deployment-5698 /api/v1/namespaces/deployment-5698/pods/test-recreate-deployment-f79dd4667-ckb28 992d286f-1d8c-4c9d-af04-18297ea46883 277604 0 2020-09-07 08:03:35 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 d3c41b95-3ca7-4ddc-8ca1-007109bd2fdc 0xc0062bf890 0xc0062bf891}] [] [{kube-controller-manager Update v1 2020-09-07 08:03:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d3c41b95-3ca7-4ddc-8ca1-007109bd2fdc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-07 08:03:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5vn25,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5vn25,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5vn25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus
{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:03:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:03:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:03:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:03:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-09-07 08:03:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:03:35.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5698" for this suite. 
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":77,"skipped":998,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:03:35.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions Sep 7 08:03:35.740: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config api-versions' Sep 7 08:03:36.258: INFO: stderr: "" Sep 7 08:03:36.258: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:03:36.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7913" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":303,"completed":78,"skipped":1019,"failed":0} ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:03:36.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-9987 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9987 to expose endpoints map[] Sep 7 08:03:36.650: INFO: successfully validated that service endpoint-test2 in namespace services-9987 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-9987 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9987 to expose endpoints map[pod1:[80]] Sep 7 08:03:40.750: INFO: successfully validated that service endpoint-test2 in namespace services-9987 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-9987 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9987 to expose endpoints 
map[pod1:[80] pod2:[80]] Sep 7 08:03:43.891: INFO: successfully validated that service endpoint-test2 in namespace services-9987 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-9987 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9987 to expose endpoints map[pod2:[80]] Sep 7 08:03:44.049: INFO: successfully validated that service endpoint-test2 in namespace services-9987 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-9987 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9987 to expose endpoints map[] Sep 7 08:03:45.104: INFO: successfully validated that service endpoint-test2 in namespace services-9987 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:03:45.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9987" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:8.760 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":303,"completed":79,"skipped":1019,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:03:45.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-90d89448-c71e-492a-a23f-119c9d007ec7 STEP: Creating a pod to test consume secrets Sep 7 08:03:45.283: INFO: Waiting up to 5m0s for pod "pod-secrets-b4e0d50e-1de0-43d7-8370-f0e186294a73" in namespace "secrets-4558" to be "Succeeded or 
Failed" Sep 7 08:03:45.297: INFO: Pod "pod-secrets-b4e0d50e-1de0-43d7-8370-f0e186294a73": Phase="Pending", Reason="", readiness=false. Elapsed: 14.217309ms Sep 7 08:03:47.302: INFO: Pod "pod-secrets-b4e0d50e-1de0-43d7-8370-f0e186294a73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019197824s Sep 7 08:03:49.306: INFO: Pod "pod-secrets-b4e0d50e-1de0-43d7-8370-f0e186294a73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023865709s STEP: Saw pod success Sep 7 08:03:49.307: INFO: Pod "pod-secrets-b4e0d50e-1de0-43d7-8370-f0e186294a73" satisfied condition "Succeeded or Failed" Sep 7 08:03:49.310: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-b4e0d50e-1de0-43d7-8370-f0e186294a73 container secret-volume-test: STEP: delete the pod Sep 7 08:03:49.357: INFO: Waiting for pod pod-secrets-b4e0d50e-1de0-43d7-8370-f0e186294a73 to disappear Sep 7 08:03:49.393: INFO: Pod pod-secrets-b4e0d50e-1de0-43d7-8370-f0e186294a73 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:03:49.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4558" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":80,"skipped":1039,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:03:49.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 08:03:49.489: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Sep 7 08:03:52.472: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3736 create -f -' Sep 7 08:03:56.805: INFO: stderr: "" Sep 7 08:03:56.805: INFO: stdout: "e2e-test-crd-publish-openapi-2154-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Sep 7 08:03:56.805: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3736 delete e2e-test-crd-publish-openapi-2154-crds test-cr' Sep 7 08:03:56.914: INFO: 
stderr: "" Sep 7 08:03:56.914: INFO: stdout: "e2e-test-crd-publish-openapi-2154-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Sep 7 08:03:56.914: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3736 apply -f -' Sep 7 08:03:57.278: INFO: stderr: "" Sep 7 08:03:57.278: INFO: stdout: "e2e-test-crd-publish-openapi-2154-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Sep 7 08:03:57.278: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3736 delete e2e-test-crd-publish-openapi-2154-crds test-cr' Sep 7 08:03:57.386: INFO: stderr: "" Sep 7 08:03:57.386: INFO: stdout: "e2e-test-crd-publish-openapi-2154-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Sep 7 08:03:57.386: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2154-crds' Sep 7 08:03:57.721: INFO: stderr: "" Sep 7 08:03:57.721: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2154-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:03:59.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3736" for this suite. • [SLOW TEST:10.240 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":303,"completed":81,"skipped":1050,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: 
Creating a kubernetes client Sep 7 08:03:59.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:04:31.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-503" for this suite. 
• [SLOW TEST:31.679 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":303,"completed":82,"skipped":1062,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:04:31.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with 
name s-test-opt-del-0b7bbe9c-5383-4058-96e8-c041d46af16e STEP: Creating secret with name s-test-opt-upd-2d9feebf-d1e0-487e-945d-196b685868c9 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-0b7bbe9c-5383-4058-96e8-c041d46af16e STEP: Updating secret s-test-opt-upd-2d9feebf-d1e0-487e-945d-196b685868c9 STEP: Creating secret with name s-test-opt-create-46b70111-5f47-44b3-930a-1b473e235b47 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:04:39.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3834" for this suite. • [SLOW TEST:8.248 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":83,"skipped":1069,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client 
Sep 7 08:04:39.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 7 08:04:40.481: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 7 08:04:42.779: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735062680, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735062680, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735062680, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735062680, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 7 08:04:46.753: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 08:04:46.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2454-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:04:47.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1676" for this suite. STEP: Destroying namespace "webhook-1676-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.584 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":303,"completed":84,"skipped":1073,"failed":0} SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:04:48.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Sep 7 08:04:52.361: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:04:52.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3402" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":85,"skipped":1075,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:04:52.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-9ac19e9a-4cef-41e3-a0b7-43de6afdbc6a STEP: Creating a pod to test consume secrets Sep 7 08:04:52.547: INFO: Waiting up to 5m0s for pod "pod-secrets-e909feae-5eca-42b6-bf47-0733368de8b6" in namespace "secrets-6937" to be "Succeeded or Failed" Sep 7 08:04:52.552: INFO: Pod "pod-secrets-e909feae-5eca-42b6-bf47-0733368de8b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.6639ms Sep 7 08:04:54.625: INFO: Pod "pod-secrets-e909feae-5eca-42b6-bf47-0733368de8b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077839276s Sep 7 08:04:56.629: INFO: Pod "pod-secrets-e909feae-5eca-42b6-bf47-0733368de8b6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.081496523s STEP: Saw pod success Sep 7 08:04:56.629: INFO: Pod "pod-secrets-e909feae-5eca-42b6-bf47-0733368de8b6" satisfied condition "Succeeded or Failed" Sep 7 08:04:56.634: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-e909feae-5eca-42b6-bf47-0733368de8b6 container secret-env-test: STEP: delete the pod Sep 7 08:04:56.721: INFO: Waiting for pod pod-secrets-e909feae-5eca-42b6-bf47-0733368de8b6 to disappear Sep 7 08:04:56.723: INFO: Pod pod-secrets-e909feae-5eca-42b6-bf47-0733368de8b6 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:04:56.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6937" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":303,"completed":86,"skipped":1106,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:04:56.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:05:12.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4693" for this suite. STEP: Destroying namespace "nsdeletetest-4822" for this suite. Sep 7 08:05:12.126: INFO: Namespace nsdeletetest-4822 was already deleted STEP: Destroying namespace "nsdeletetest-6428" for this suite. 
• [SLOW TEST:15.400 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":303,"completed":87,"skipped":1111,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:05:12.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-3b0d131d-a94b-4737-923e-4935a183e7f5 STEP: Creating a pod to test consume configMaps Sep 7 08:05:12.252: INFO: Waiting up to 5m0s for pod "pod-configmaps-336798a5-834e-40de-b415-d7e99610d197" in namespace "configmap-7013" to be "Succeeded or Failed" Sep 7 08:05:12.264: INFO: Pod 
"pod-configmaps-336798a5-834e-40de-b415-d7e99610d197": Phase="Pending", Reason="", readiness=false. Elapsed: 11.530812ms Sep 7 08:05:14.270: INFO: Pod "pod-configmaps-336798a5-834e-40de-b415-d7e99610d197": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017359225s Sep 7 08:05:16.274: INFO: Pod "pod-configmaps-336798a5-834e-40de-b415-d7e99610d197": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022206723s STEP: Saw pod success Sep 7 08:05:16.275: INFO: Pod "pod-configmaps-336798a5-834e-40de-b415-d7e99610d197" satisfied condition "Succeeded or Failed" Sep 7 08:05:16.278: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-336798a5-834e-40de-b415-d7e99610d197 container configmap-volume-test: STEP: delete the pod Sep 7 08:05:16.309: INFO: Waiting for pod pod-configmaps-336798a5-834e-40de-b415-d7e99610d197 to disappear Sep 7 08:05:16.319: INFO: Pod pod-configmaps-336798a5-834e-40de-b415-d7e99610d197 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:05:16.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7013" for this suite. 
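The "volume with mappings as non-root" variant just verified combines three things: a ConfigMap volume, an `items` list remapping a key to a custom path, and a non-root security context. A sketch of such a pod (ConfigMap name, key, and paths are assumptions for illustration):

```yaml
# Hypothetical pod consuming a ConfigMap as a volume "with mappings",
# running as a non-root user, mirroring the test case above.
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo   # hypothetical name
spec:
  securityContext:
    runAsUser: 1000             # "as non-root"
  containers:
  - name: configmap-volume-test # matches the container name in the log
    image: busybox
    command: ["cat", "/etc/config/path/to/key"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: my-config           # hypothetical ConfigMap
      items:                    # the "mappings": remap a key to a chosen path
      - key: data-1             # hypothetical key
        path: path/to/key
```

The `items` stanza is what distinguishes this variant from the plain ConfigMap-volume test: only the listed keys are projected, at the paths given.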
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":88,"skipped":1112,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:05:16.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should scale a replication controller [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Sep 7 08:05:16.387: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4221' Sep 7 08:05:16.746: INFO: stderr: "" Sep 7 08:05:16.746: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Sep 7 08:05:16.746: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4221' Sep 7 08:05:16.859: INFO: stderr: "" Sep 7 08:05:16.859: INFO: stdout: "update-demo-nautilus-plvc4 update-demo-nautilus-wdc6g " Sep 7 08:05:16.859: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-plvc4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4221' Sep 7 08:05:16.991: INFO: stderr: "" Sep 7 08:05:16.991: INFO: stdout: "" Sep 7 08:05:16.991: INFO: update-demo-nautilus-plvc4 is created but not running Sep 7 08:05:21.991: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4221' Sep 7 08:05:22.095: INFO: stderr: "" Sep 7 08:05:22.095: INFO: stdout: "update-demo-nautilus-plvc4 update-demo-nautilus-wdc6g " Sep 7 08:05:22.096: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-plvc4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4221' Sep 7 08:05:22.189: INFO: stderr: "" Sep 7 08:05:22.189: INFO: stdout: "true" Sep 7 08:05:22.189: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-plvc4 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4221' Sep 7 08:05:22.286: INFO: stderr: "" Sep 7 08:05:22.287: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 7 08:05:22.287: INFO: validating pod update-demo-nautilus-plvc4 Sep 7 08:05:22.290: INFO: got data: { "image": "nautilus.jpg" } Sep 7 08:05:22.290: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 7 08:05:22.290: INFO: update-demo-nautilus-plvc4 is verified up and running Sep 7 08:05:22.291: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wdc6g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4221' Sep 7 08:05:22.393: INFO: stderr: "" Sep 7 08:05:22.393: INFO: stdout: "true" Sep 7 08:05:22.393: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wdc6g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4221' Sep 7 08:05:22.486: INFO: stderr: "" Sep 7 08:05:22.486: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 7 08:05:22.486: INFO: validating pod update-demo-nautilus-wdc6g Sep 7 08:05:22.490: INFO: got data: { "image": "nautilus.jpg" } Sep 7 08:05:22.490: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Sep 7 08:05:22.490: INFO: update-demo-nautilus-wdc6g is verified up and running STEP: scaling down the replication controller Sep 7 08:05:22.492: INFO: scanned /root for discovery docs: Sep 7 08:05:22.492: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4221' Sep 7 08:05:23.635: INFO: stderr: "" Sep 7 08:05:23.635: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Sep 7 08:05:23.635: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4221' Sep 7 08:05:23.735: INFO: stderr: "" Sep 7 08:05:23.735: INFO: stdout: "update-demo-nautilus-plvc4 update-demo-nautilus-wdc6g " STEP: Replicas for name=update-demo: expected=1 actual=2 Sep 7 08:05:28.736: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4221' Sep 7 08:05:28.832: INFO: stderr: "" Sep 7 08:05:28.832: INFO: stdout: "update-demo-nautilus-wdc6g " Sep 7 08:05:28.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wdc6g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4221' Sep 7 08:05:28.938: INFO: stderr: "" Sep 7 08:05:28.938: INFO: stdout: "true" Sep 7 08:05:28.938: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wdc6g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4221' Sep 7 08:05:29.047: INFO: stderr: "" Sep 7 08:05:29.047: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 7 08:05:29.047: INFO: validating pod update-demo-nautilus-wdc6g Sep 7 08:05:29.050: INFO: got data: { "image": "nautilus.jpg" } Sep 7 08:05:29.050: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 7 08:05:29.050: INFO: update-demo-nautilus-wdc6g is verified up and running STEP: scaling up the replication controller Sep 7 08:05:29.054: INFO: scanned /root for discovery docs: Sep 7 08:05:29.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4221' Sep 7 08:05:30.188: INFO: stderr: "" Sep 7 08:05:30.188: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Sep 7 08:05:30.188: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4221' Sep 7 08:05:30.350: INFO: stderr: "" Sep 7 08:05:30.350: INFO: stdout: "update-demo-nautilus-tczzc update-demo-nautilus-wdc6g " Sep 7 08:05:30.350: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tczzc -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4221' Sep 7 08:05:30.465: INFO: stderr: "" Sep 7 08:05:30.465: INFO: stdout: "" Sep 7 08:05:30.465: INFO: update-demo-nautilus-tczzc is created but not running Sep 7 08:05:35.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4221' Sep 7 08:05:35.572: INFO: stderr: "" Sep 7 08:05:35.572: INFO: stdout: "update-demo-nautilus-tczzc update-demo-nautilus-wdc6g " Sep 7 08:05:35.572: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tczzc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4221' Sep 7 08:05:35.667: INFO: stderr: "" Sep 7 08:05:35.667: INFO: stdout: "true" Sep 7 08:05:35.667: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tczzc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4221' Sep 7 08:05:35.770: INFO: stderr: "" Sep 7 08:05:35.770: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 7 08:05:35.770: INFO: validating pod update-demo-nautilus-tczzc Sep 7 08:05:35.774: INFO: got data: { "image": "nautilus.jpg" } Sep 7 08:05:35.775: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Sep 7 08:05:35.775: INFO: update-demo-nautilus-tczzc is verified up and running Sep 7 08:05:35.775: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wdc6g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4221' Sep 7 08:05:35.874: INFO: stderr: "" Sep 7 08:05:35.874: INFO: stdout: "true" Sep 7 08:05:35.874: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wdc6g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4221' Sep 7 08:05:35.981: INFO: stderr: "" Sep 7 08:05:35.982: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 7 08:05:35.982: INFO: validating pod update-demo-nautilus-wdc6g Sep 7 08:05:35.985: INFO: got data: { "image": "nautilus.jpg" } Sep 7 08:05:35.985: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 7 08:05:35.985: INFO: update-demo-nautilus-wdc6g is verified up and running STEP: using delete to clean up resources Sep 7 08:05:35.985: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4221' Sep 7 08:05:36.101: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Sep 7 08:05:36.101: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Sep 7 08:05:36.101: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4221' Sep 7 08:05:36.194: INFO: stderr: "No resources found in kubectl-4221 namespace.\n" Sep 7 08:05:36.194: INFO: stdout: "" Sep 7 08:05:36.194: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4221 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 7 08:05:36.295: INFO: stderr: "" Sep 7 08:05:36.295: INFO: stdout: "update-demo-nautilus-tczzc\nupdate-demo-nautilus-wdc6g\n" Sep 7 08:05:36.795: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4221' Sep 7 08:05:36.928: INFO: stderr: "No resources found in kubectl-4221 namespace.\n" Sep 7 08:05:36.929: INFO: stdout: "" Sep 7 08:05:36.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4221 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 7 08:05:37.075: INFO: stderr: "" Sep 7 08:05:37.075: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:05:37.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4221" for this suite. 
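The scale steps the test drives with `kubectl scale rc update-demo-nautilus --replicas=... --timeout=5m` operate on a ReplicationController along these lines (a sketch: the name, label, container name, and image mirror the log above; the remaining fields are assumptions):

```yaml
# ReplicationController of the shape created by `kubectl create -f -` above.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2                  # the field `kubectl scale` rewrites (2 -> 1 -> 2 in the log)
  selector:
    name: update-demo          # matches the -l name=update-demo queries above
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo      # the container name the go-templates check
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
```

Note how the log's go-template checks (`eq .name "update-demo"`) and image validation (`gcr.io/kubernetes-e2e-test-images/nautilus:1.0`) line up with the container name and image in this spec.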
• [SLOW TEST:20.738 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should scale a replication controller [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":303,"completed":89,"skipped":1122,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:05:37.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Sep 7 08:05:37.293: INFO: Waiting up to 5m0s for pod "pod-893deb67-10a7-424a-9f58-79461e2070d9" in namespace "emptydir-2611" to be "Succeeded or Failed" Sep 7 08:05:37.356: INFO: Pod 
"pod-893deb67-10a7-424a-9f58-79461e2070d9": Phase="Pending", Reason="", readiness=false. Elapsed: 63.009973ms Sep 7 08:05:39.359: INFO: Pod "pod-893deb67-10a7-424a-9f58-79461e2070d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066353721s Sep 7 08:05:41.363: INFO: Pod "pod-893deb67-10a7-424a-9f58-79461e2070d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070112285s STEP: Saw pod success Sep 7 08:05:41.363: INFO: Pod "pod-893deb67-10a7-424a-9f58-79461e2070d9" satisfied condition "Succeeded or Failed" Sep 7 08:05:41.365: INFO: Trying to get logs from node latest-worker2 pod pod-893deb67-10a7-424a-9f58-79461e2070d9 container test-container: STEP: delete the pod Sep 7 08:05:41.404: INFO: Waiting for pod pod-893deb67-10a7-424a-9f58-79461e2070d9 to disappear Sep 7 08:05:41.420: INFO: Pod pod-893deb67-10a7-424a-9f58-79461e2070d9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:05:41.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2611" for this suite. 
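The emptyDir variant named "(non-root,0644,tmpfs)" encodes its parameters in the test name: a non-root user, a file created with mode 0644, and a RAM-backed volume. A hedged sketch of the corresponding pod (names and the exact command are assumptions; the 0644 refers to the mode of a file the test container writes into the volume):

```yaml
# Hypothetical pod for the (non-root,0644,tmpfs) emptyDir case above.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo          # hypothetical name
spec:
  securityContext:
    runAsUser: 1000            # "non-root"
  containers:
  - name: test-container       # matches the container name in the log
    image: busybox
    command: ["/bin/sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory           # "tmpfs": back the emptyDir with RAM
```

Setting `medium: Memory` is what makes this a tmpfs mount rather than node-local disk.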
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":90,"skipped":1149,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:05:41.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 7 08:05:41.513: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fbfd670e-c3ac-490f-9982-a698e314644c" in namespace "downward-api-7227" to be "Succeeded or Failed" Sep 7 08:05:41.522: INFO: Pod "downwardapi-volume-fbfd670e-c3ac-490f-9982-a698e314644c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.746569ms Sep 7 08:05:43.526: INFO: Pod "downwardapi-volume-fbfd670e-c3ac-490f-9982-a698e314644c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012615971s Sep 7 08:05:45.530: INFO: Pod "downwardapi-volume-fbfd670e-c3ac-490f-9982-a698e314644c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017331947s STEP: Saw pod success Sep 7 08:05:45.530: INFO: Pod "downwardapi-volume-fbfd670e-c3ac-490f-9982-a698e314644c" satisfied condition "Succeeded or Failed" Sep 7 08:05:45.534: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-fbfd670e-c3ac-490f-9982-a698e314644c container client-container: STEP: delete the pod Sep 7 08:05:45.566: INFO: Waiting for pod downwardapi-volume-fbfd670e-c3ac-490f-9982-a698e314644c to disappear Sep 7 08:05:45.595: INFO: Pod downwardapi-volume-fbfd670e-c3ac-490f-9982-a698e314644c no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:05:45.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7227" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":303,"completed":91,"skipped":1153,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:05:45.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 08:07:45.735: INFO: Deleting pod "var-expansion-4e26cd1a-4bb0-4910-afc0-d690b78846a7" in namespace "var-expansion-5802" Sep 7 08:07:45.740: INFO: Wait up to 5m0s for pod "var-expansion-4e26cd1a-4bb0-4910-afc0-d690b78846a7" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:07:47.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5802" for this suite. • [SLOW TEST:122.175 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":303,"completed":92,"skipped":1161,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:07:47.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Sep 7 08:07:47.829: INFO: >>> kubeConfig: /root/.kube/config
Sep 7 08:07:49.802: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:08:00.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-236" for this suite.
• [SLOW TEST:12.859 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":303,"completed":93,"skipped":1171,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for the cluster [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:08:00.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7530.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7530.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 7 08:08:06.879: INFO: DNS probes using dns-7530/dns-test-9cd03210-f0cd-45e6-89af-8ae61b04d5b4 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:08:06.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7530" for this suite.
• [SLOW TEST:6.312 seconds]
[sig-network] DNS
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":303,"completed":94,"skipped":1193,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] Certificates API [Privileged:ClusterAdmin]
  should support CSR API operations [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:08:06.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename certificates
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support CSR API operations [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting /apis
STEP: getting /apis/certificates.k8s.io
STEP: getting /apis/certificates.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Sep 7 08:08:08.156: INFO: starting watch
STEP: patching
STEP: updating
Sep 7 08:08:08.189: INFO: waiting for watch events with expected annotations
Sep 7 08:08:08.189: INFO: saw patched and updated annotations
STEP: getting /approval
STEP: patching /approval
STEP: updating /approval
STEP: getting 
/status
STEP: patching /status
STEP: updating /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:08:08.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-9845" for this suite.
•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":303,"completed":95,"skipped":1233,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath
  runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:08:08.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89
Sep 7 08:08:08.415: INFO: Waiting up to 1m0s for all nodes to be ready
Sep 7 08:09:08.443: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:09:08.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PreemptionExecutionPath
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:487
STEP: Finding an available node
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
Sep 7 08:09:12.568: INFO: found a healthy node: latest-worker
[It] runs ReplicaSets to verify preemption running path [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 7 08:09:34.970: INFO: pods created so far: [1 1 1]
Sep 7 08:09:34.970: INFO: length of pods created so far: 3
Sep 7 08:09:46.980: INFO: pods created so far: [2 2 1]
[AfterEach] PreemptionExecutionPath
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:09:53.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-7955" for this suite.
[AfterEach] PreemptionExecutionPath
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:461
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:09:54.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-7134" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77

• [SLOW TEST:105.837 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PreemptionExecutionPath
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:450
    runs ReplicaSets to verify preemption running path [Conformance]
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":303,"completed":96,"skipped":1244,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should delete a collection of pods [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:09:54.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should delete a collection of pods [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create set of pods
Sep 7 08:09:54.279: INFO: created test-pod-1
Sep 7 08:09:54.283: INFO: created test-pod-2
Sep 7 08:09:54.304: INFO: created test-pod-3
STEP: waiting for all 3 pods to be located
STEP: waiting for all pods to be deleted
[AfterEach] [k8s.io] Pods
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:09:54.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7140" for this suite.
•{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":303,"completed":97,"skipped":1261,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:09:54.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0907 08:10:07.359429 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Sep 7 08:11:09.388: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
Sep 7 08:11:09.388: INFO: Deleting pod "simpletest-rc-to-be-deleted-8skwm" in namespace "gc-8013"
Sep 7 08:11:09.610: INFO: Deleting pod "simpletest-rc-to-be-deleted-bmhqq" in namespace "gc-8013"
Sep 7 08:11:09.966: INFO: Deleting pod "simpletest-rc-to-be-deleted-bqcbh" in namespace "gc-8013"
Sep 7 08:11:10.007: INFO: Deleting pod "simpletest-rc-to-be-deleted-lxcsd" in namespace "gc-8013"
Sep 7 08:11:10.219: INFO: Deleting pod "simpletest-rc-to-be-deleted-qgd4z" in namespace "gc-8013"
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:11:10.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8013" for this suite.

• [SLOW TEST:75.979 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":303,"completed":98,"skipped":1276,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:11:10.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep 7 08:11:11.158: INFO: Waiting up to 5m0s for pod "pod-4e1d5b76-9fc0-4417-a447-f66939a15dd8" in namespace "emptydir-8862" to be "Succeeded or Failed"
Sep 7 08:11:11.320: INFO: Pod "pod-4e1d5b76-9fc0-4417-a447-f66939a15dd8": Phase="Pending", Reason="", readiness=false. Elapsed: 161.728728ms
Sep 7 08:11:13.324: INFO: Pod "pod-4e1d5b76-9fc0-4417-a447-f66939a15dd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165165336s
Sep 7 08:11:15.328: INFO: Pod "pod-4e1d5b76-9fc0-4417-a447-f66939a15dd8": Phase="Running", Reason="", readiness=true. Elapsed: 4.169790233s
Sep 7 08:11:17.333: INFO: Pod "pod-4e1d5b76-9fc0-4417-a447-f66939a15dd8": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 6.174598379s
STEP: Saw pod success
Sep 7 08:11:17.333: INFO: Pod "pod-4e1d5b76-9fc0-4417-a447-f66939a15dd8" satisfied condition "Succeeded or Failed"
Sep 7 08:11:17.336: INFO: Trying to get logs from node latest-worker pod pod-4e1d5b76-9fc0-4417-a447-f66939a15dd8 container test-container:
STEP: delete the pod
Sep 7 08:11:17.434: INFO: Waiting for pod pod-4e1d5b76-9fc0-4417-a447-f66939a15dd8 to disappear
Sep 7 08:11:17.441: INFO: Pod pod-4e1d5b76-9fc0-4417-a447-f66939a15dd8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:11:17.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8862" for this suite.

• [SLOW TEST:6.746 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":99,"skipped":1281,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:11:17.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 7 08:11:17.525: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1ab192e4-dc3d-4248-9941-407feb9a9279" in namespace "downward-api-497" to be "Succeeded or Failed"
Sep 7 08:11:17.553: INFO: Pod "downwardapi-volume-1ab192e4-dc3d-4248-9941-407feb9a9279": Phase="Pending", Reason="", readiness=false. Elapsed: 27.480672ms
Sep 7 08:11:19.557: INFO: Pod "downwardapi-volume-1ab192e4-dc3d-4248-9941-407feb9a9279": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031691977s
Sep 7 08:11:21.561: INFO: Pod "downwardapi-volume-1ab192e4-dc3d-4248-9941-407feb9a9279": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.035310587s
STEP: Saw pod success
Sep 7 08:11:21.561: INFO: Pod "downwardapi-volume-1ab192e4-dc3d-4248-9941-407feb9a9279" satisfied condition "Succeeded or Failed"
Sep 7 08:11:21.563: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-1ab192e4-dc3d-4248-9941-407feb9a9279 container client-container:
STEP: delete the pod
Sep 7 08:11:21.587: INFO: Waiting for pod downwardapi-volume-1ab192e4-dc3d-4248-9941-407feb9a9279 to disappear
Sep 7 08:11:21.591: INFO: Pod downwardapi-volume-1ab192e4-dc3d-4248-9941-407feb9a9279 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:11:21.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-497" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":100,"skipped":1297,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:11:21.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep 7 08:11:21.690: INFO: Waiting up to 5m0s for pod "pod-e0dfa9a2-7925-44c3-b3c1-ba8043e28609" in namespace "emptydir-9627" to be "Succeeded or Failed"
Sep 7 08:11:21.693: INFO: Pod "pod-e0dfa9a2-7925-44c3-b3c1-ba8043e28609": Phase="Pending", Reason="", readiness=false. Elapsed: 3.253951ms
Sep 7 08:11:23.697: INFO: Pod "pod-e0dfa9a2-7925-44c3-b3c1-ba8043e28609": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007364496s
Sep 7 08:11:25.702: INFO: Pod "pod-e0dfa9a2-7925-44c3-b3c1-ba8043e28609": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012281232s
STEP: Saw pod success
Sep 7 08:11:25.702: INFO: Pod "pod-e0dfa9a2-7925-44c3-b3c1-ba8043e28609" satisfied condition "Succeeded or Failed"
Sep 7 08:11:25.705: INFO: Trying to get logs from node latest-worker pod pod-e0dfa9a2-7925-44c3-b3c1-ba8043e28609 container test-container:
STEP: delete the pod
Sep 7 08:11:25.752: INFO: Waiting for pod pod-e0dfa9a2-7925-44c3-b3c1-ba8043e28609 to disappear
Sep 7 08:11:25.764: INFO: Pod pod-e0dfa9a2-7925-44c3-b3c1-ba8043e28609 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:11:25.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9627" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":101,"skipped":1385,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] PodTemplates
  should delete a collection of pod templates [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] PodTemplates
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:11:25.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a collection of pod templates [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create set of pod templates
Sep 7 08:11:25.878: INFO: created test-podtemplate-1
Sep 7 08:11:25.884: INFO: created test-podtemplate-2
Sep 7 08:11:25.889: INFO: created test-podtemplate-3
STEP: get a list of pod templates with a label in the current namespace
STEP: delete collection of pod templates
Sep 7 08:11:25.896: INFO: requesting DeleteCollection of pod templates
STEP: check that the list of pod templates matches the requested quantity
Sep 7 08:11:25.944: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:11:25.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-9773" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":303,"completed":102,"skipped":1412,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:11:25.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-5028
Sep 7 08:11:30.034: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-5028 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Sep 7 08:11:30.283: INFO: stderr: "I0907 08:11:30.185171 1540 log.go:181] (0xc000aab130) (0xc000aa28c0) Create stream\nI0907 08:11:30.185229 1540 log.go:181] (0xc000aab130) (0xc000aa28c0) Stream added, broadcasting: 1\nI0907 08:11:30.190947 1540 log.go:181] (0xc000aab130) Reply frame received for 1\nI0907 08:11:30.190998 1540 log.go:181] (0xc000aab130) 
(0xc000aa2000) Create stream\nI0907 08:11:30.191012 1540 log.go:181] (0xc000aab130) (0xc000aa2000) Stream added, broadcasting: 3\nI0907 08:11:30.192240 1540 log.go:181] (0xc000aab130) Reply frame received for 3\nI0907 08:11:30.192283 1540 log.go:181] (0xc000aab130) (0xc0007ae1e0) Create stream\nI0907 08:11:30.192297 1540 log.go:181] (0xc000aab130) (0xc0007ae1e0) Stream added, broadcasting: 5\nI0907 08:11:30.193305 1540 log.go:181] (0xc000aab130) Reply frame received for 5\nI0907 08:11:30.271045 1540 log.go:181] (0xc000aab130) Data frame received for 5\nI0907 08:11:30.271089 1540 log.go:181] (0xc0007ae1e0) (5) Data frame handling\nI0907 08:11:30.271114 1540 log.go:181] (0xc0007ae1e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0907 08:11:30.276786 1540 log.go:181] (0xc000aab130) Data frame received for 3\nI0907 08:11:30.276814 1540 log.go:181] (0xc000aa2000) (3) Data frame handling\nI0907 08:11:30.276832 1540 log.go:181] (0xc000aa2000) (3) Data frame sent\nI0907 08:11:30.277448 1540 log.go:181] (0xc000aab130) Data frame received for 5\nI0907 08:11:30.277470 1540 log.go:181] (0xc000aab130) Data frame received for 3\nI0907 08:11:30.277487 1540 log.go:181] (0xc000aa2000) (3) Data frame handling\nI0907 08:11:30.277504 1540 log.go:181] (0xc0007ae1e0) (5) Data frame handling\nI0907 08:11:30.279058 1540 log.go:181] (0xc000aab130) Data frame received for 1\nI0907 08:11:30.279085 1540 log.go:181] (0xc000aa28c0) (1) Data frame handling\nI0907 08:11:30.279106 1540 log.go:181] (0xc000aa28c0) (1) Data frame sent\nI0907 08:11:30.279196 1540 log.go:181] (0xc000aab130) (0xc000aa28c0) Stream removed, broadcasting: 1\nI0907 08:11:30.279249 1540 log.go:181] (0xc000aab130) Go away received\nI0907 08:11:30.279501 1540 log.go:181] (0xc000aab130) (0xc000aa28c0) Stream removed, broadcasting: 1\nI0907 08:11:30.279515 1540 log.go:181] (0xc000aab130) (0xc000aa2000) Stream removed, broadcasting: 3\nI0907 08:11:30.279521 1540 log.go:181] 
(0xc000aab130) (0xc0007ae1e0) Stream removed, broadcasting: 5\n"
Sep 7 08:11:30.283: INFO: stdout: "iptables"
Sep 7 08:11:30.283: INFO: proxyMode: iptables
Sep 7 08:11:30.288: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Sep 7 08:11:30.306: INFO: Pod kube-proxy-mode-detector still exists
Sep 7 08:11:32.307: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Sep 7 08:11:32.337: INFO: Pod kube-proxy-mode-detector still exists
Sep 7 08:11:34.307: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Sep 7 08:11:34.312: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating service affinity-clusterip-timeout in namespace services-5028
STEP: creating replication controller affinity-clusterip-timeout in namespace services-5028
I0907 08:11:34.428550 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-5028, replica count: 3
I0907 08:11:37.478977 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0907 08:11:40.479203 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Sep 7 08:11:40.486: INFO: Creating new exec pod
Sep 7 08:11:45.504: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-5028 execpod-affinity2zrzg -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Sep 7 08:11:45.743: INFO: stderr: "I0907 08:11:45.645837 1558 log.go:181] (0xc0009c9550) (0xc0009d0780) Create stream\nI0907 08:11:45.645888 1558 log.go:181] (0xc0009c9550) (0xc0009d0780) Stream added, broadcasting: 1\nI0907 08:11:45.651623 1558 log.go:181] (0xc0009c9550) Reply frame received for 1\nI0907 08:11:45.651661 1558 log.go:181] (0xc0009c9550) (0xc0009d0000) Create stream\nI0907 08:11:45.651670 1558 
log.go:181] (0xc0009c9550) (0xc0009d0000) Stream added, broadcasting: 3\nI0907 08:11:45.652768 1558 log.go:181] (0xc0009c9550) Reply frame received for 3\nI0907 08:11:45.652813 1558 log.go:181] (0xc0009c9550) (0xc0009d00a0) Create stream\nI0907 08:11:45.652826 1558 log.go:181] (0xc0009c9550) (0xc0009d00a0) Stream added, broadcasting: 5\nI0907 08:11:45.653736 1558 log.go:181] (0xc0009c9550) Reply frame received for 5\nI0907 08:11:45.735619 1558 log.go:181] (0xc0009c9550) Data frame received for 5\nI0907 08:11:45.735649 1558 log.go:181] (0xc0009d00a0) (5) Data frame handling\nI0907 08:11:45.735666 1558 log.go:181] (0xc0009d00a0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI0907 08:11:45.736294 1558 log.go:181] (0xc0009c9550) Data frame received for 5\nI0907 08:11:45.736322 1558 log.go:181] (0xc0009d00a0) (5) Data frame handling\nI0907 08:11:45.736339 1558 log.go:181] (0xc0009d00a0) (5) Data frame sent\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0907 08:11:45.736807 1558 log.go:181] (0xc0009c9550) Data frame received for 5\nI0907 08:11:45.736830 1558 log.go:181] (0xc0009d00a0) (5) Data frame handling\nI0907 08:11:45.737063 1558 log.go:181] (0xc0009c9550) Data frame received for 3\nI0907 08:11:45.737090 1558 log.go:181] (0xc0009d0000) (3) Data frame handling\nI0907 08:11:45.738871 1558 log.go:181] (0xc0009c9550) Data frame received for 1\nI0907 08:11:45.738903 1558 log.go:181] (0xc0009d0780) (1) Data frame handling\nI0907 08:11:45.738933 1558 log.go:181] (0xc0009d0780) (1) Data frame sent\nI0907 08:11:45.738961 1558 log.go:181] (0xc0009c9550) (0xc0009d0780) Stream removed, broadcasting: 1\nI0907 08:11:45.738997 1558 log.go:181] (0xc0009c9550) Go away received\nI0907 08:11:45.739338 1558 log.go:181] (0xc0009c9550) (0xc0009d0780) Stream removed, broadcasting: 1\nI0907 08:11:45.739357 1558 log.go:181] (0xc0009c9550) (0xc0009d0000) Stream removed, broadcasting: 3\nI0907 08:11:45.739365 1558 log.go:181] (0xc0009c9550) 
(0xc0009d00a0) Stream removed, broadcasting: 5\n" Sep 7 08:11:45.743: INFO: stdout: "" Sep 7 08:11:45.744: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-5028 execpod-affinity2zrzg -- /bin/sh -x -c nc -zv -t -w 2 10.98.71.161 80' Sep 7 08:11:45.973: INFO: stderr: "I0907 08:11:45.891246 1576 log.go:181] (0xc00003a420) (0xc0005da000) Create stream\nI0907 08:11:45.891313 1576 log.go:181] (0xc00003a420) (0xc0005da000) Stream added, broadcasting: 1\nI0907 08:11:45.895603 1576 log.go:181] (0xc00003a420) Reply frame received for 1\nI0907 08:11:45.895673 1576 log.go:181] (0xc00003a420) (0xc000532000) Create stream\nI0907 08:11:45.895698 1576 log.go:181] (0xc00003a420) (0xc000532000) Stream added, broadcasting: 3\nI0907 08:11:45.896941 1576 log.go:181] (0xc00003a420) Reply frame received for 3\nI0907 08:11:45.896976 1576 log.go:181] (0xc00003a420) (0xc000a1a000) Create stream\nI0907 08:11:45.896990 1576 log.go:181] (0xc00003a420) (0xc000a1a000) Stream added, broadcasting: 5\nI0907 08:11:45.897914 1576 log.go:181] (0xc00003a420) Reply frame received for 5\nI0907 08:11:45.966180 1576 log.go:181] (0xc00003a420) Data frame received for 3\nI0907 08:11:45.966226 1576 log.go:181] (0xc000532000) (3) Data frame handling\nI0907 08:11:45.966247 1576 log.go:181] (0xc00003a420) Data frame received for 5\nI0907 08:11:45.966253 1576 log.go:181] (0xc000a1a000) (5) Data frame handling\nI0907 08:11:45.966260 1576 log.go:181] (0xc000a1a000) (5) Data frame sent\n+ nc -zv -t -w 2 10.98.71.161 80\nConnection to 10.98.71.161 80 port [tcp/http] succeeded!\nI0907 08:11:45.966344 1576 log.go:181] (0xc00003a420) Data frame received for 5\nI0907 08:11:45.966383 1576 log.go:181] (0xc000a1a000) (5) Data frame handling\nI0907 08:11:45.968131 1576 log.go:181] (0xc00003a420) Data frame received for 1\nI0907 08:11:45.968179 1576 log.go:181] (0xc0005da000) (1) Data frame handling\nI0907 08:11:45.968206 1576 log.go:181] 
(0xc0005da000) (1) Data frame sent\nI0907 08:11:45.968232 1576 log.go:181] (0xc00003a420) (0xc0005da000) Stream removed, broadcasting: 1\nI0907 08:11:45.968261 1576 log.go:181] (0xc00003a420) Go away received\nI0907 08:11:45.968839 1576 log.go:181] (0xc00003a420) (0xc0005da000) Stream removed, broadcasting: 1\nI0907 08:11:45.968864 1576 log.go:181] (0xc00003a420) (0xc000532000) Stream removed, broadcasting: 3\nI0907 08:11:45.968877 1576 log.go:181] (0xc00003a420) (0xc000a1a000) Stream removed, broadcasting: 5\n" Sep 7 08:11:45.973: INFO: stdout: "" Sep 7 08:11:45.973: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-5028 execpod-affinity2zrzg -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.98.71.161:80/ ; done' Sep 7 08:11:46.297: INFO: stderr: "I0907 08:11:46.102960 1594 log.go:181] (0xc0007b5130) (0xc0007ac960) Create stream\nI0907 08:11:46.103019 1594 log.go:181] (0xc0007b5130) (0xc0007ac960) Stream added, broadcasting: 1\nI0907 08:11:46.108104 1594 log.go:181] (0xc0007b5130) Reply frame received for 1\nI0907 08:11:46.108149 1594 log.go:181] (0xc0007b5130) (0xc000ca0000) Create stream\nI0907 08:11:46.108164 1594 log.go:181] (0xc0007b5130) (0xc000ca0000) Stream added, broadcasting: 3\nI0907 08:11:46.109106 1594 log.go:181] (0xc0007b5130) Reply frame received for 3\nI0907 08:11:46.109149 1594 log.go:181] (0xc0007b5130) (0xc000ca00a0) Create stream\nI0907 08:11:46.109161 1594 log.go:181] (0xc0007b5130) (0xc000ca00a0) Stream added, broadcasting: 5\nI0907 08:11:46.109967 1594 log.go:181] (0xc0007b5130) Reply frame received for 5\nI0907 08:11:46.178691 1594 log.go:181] (0xc0007b5130) Data frame received for 5\nI0907 08:11:46.178734 1594 log.go:181] (0xc000ca00a0) (5) Data frame handling\nI0907 08:11:46.178749 1594 log.go:181] (0xc000ca00a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.71.161:80/\nI0907 
08:11:46.178779 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.178821 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.178874 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.186046 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.186083 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.186120 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.186312 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.186340 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.186354 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.186369 1594 log.go:181] (0xc0007b5130) Data frame received for 5\nI0907 08:11:46.186380 1594 log.go:181] (0xc000ca00a0) (5) Data frame handling\nI0907 08:11:46.186397 1594 log.go:181] (0xc000ca00a0) (5) Data frame sent\nI0907 08:11:46.186408 1594 log.go:181] (0xc0007b5130) Data frame received for 5\nI0907 08:11:46.186421 1594 log.go:181] (0xc000ca00a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.71.161:80/\nI0907 08:11:46.186470 1594 log.go:181] (0xc000ca00a0) (5) Data frame sent\nI0907 08:11:46.193259 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.193289 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.193415 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.193890 1594 log.go:181] (0xc0007b5130) Data frame received for 5\nI0907 08:11:46.193941 1594 log.go:181] (0xc000ca00a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.71.161:80/\nI0907 08:11:46.193971 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.194013 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.194054 1594 log.go:181] (0xc000ca00a0) (5) Data frame sent\nI0907 08:11:46.194091 1594 log.go:181] (0xc000ca0000) (3) Data frame 
sent\nI0907 08:11:46.199397 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.199432 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.199452 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.200476 1594 log.go:181] (0xc0007b5130) Data frame received for 5\nI0907 08:11:46.200499 1594 log.go:181] (0xc000ca00a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.71.161:80/\nI0907 08:11:46.200531 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.200628 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.200648 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.200667 1594 log.go:181] (0xc000ca00a0) (5) Data frame sent\nI0907 08:11:46.204464 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.204482 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.204494 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.205248 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.205289 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.205302 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.205321 1594 log.go:181] (0xc0007b5130) Data frame received for 5\nI0907 08:11:46.205331 1594 log.go:181] (0xc000ca00a0) (5) Data frame handling\nI0907 08:11:46.205340 1594 log.go:181] (0xc000ca00a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.71.161:80/\nI0907 08:11:46.210661 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.210686 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.210708 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.210984 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.211003 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.211012 1594 log.go:181] (0xc000ca0000) 
(3) Data frame sent\nI0907 08:11:46.211027 1594 log.go:181] (0xc0007b5130) Data frame received for 5\nI0907 08:11:46.211041 1594 log.go:181] (0xc000ca00a0) (5) Data frame handling\nI0907 08:11:46.211055 1594 log.go:181] (0xc000ca00a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.71.161:80/\nI0907 08:11:46.218402 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.218421 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.218435 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.219110 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.219141 1594 log.go:181] (0xc0007b5130) Data frame received for 5\nI0907 08:11:46.219173 1594 log.go:181] (0xc000ca00a0) (5) Data frame handling\nI0907 08:11:46.219185 1594 log.go:181] (0xc000ca00a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.71.161:80/\nI0907 08:11:46.219203 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.219213 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.224702 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.224719 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.224728 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.225267 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.225292 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.225305 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.225327 1594 log.go:181] (0xc0007b5130) Data frame received for 5\nI0907 08:11:46.225340 1594 log.go:181] (0xc000ca00a0) (5) Data frame handling\nI0907 08:11:46.225358 1594 log.go:181] (0xc000ca00a0) (5) Data frame sent\nI0907 08:11:46.225371 1594 log.go:181] (0xc0007b5130) Data frame received for 5\nI0907 08:11:46.225384 1594 log.go:181] (0xc000ca00a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.98.71.161:80/\nI0907 08:11:46.225411 1594 log.go:181] (0xc000ca00a0) (5) Data frame sent\nI0907 08:11:46.230481 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.230496 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.230504 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.231273 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.231292 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.231308 1594 log.go:181] (0xc0007b5130) Data frame received for 5\nI0907 08:11:46.231331 1594 log.go:181] (0xc000ca00a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.71.161:80/\nI0907 08:11:46.231353 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.231384 1594 log.go:181] (0xc000ca00a0) (5) Data frame sent\nI0907 08:11:46.238502 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.238521 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.238532 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.239296 1594 log.go:181] (0xc0007b5130) Data frame received for 5\nI0907 08:11:46.239309 1594 log.go:181] (0xc000ca00a0) (5) Data frame handling\nI0907 08:11:46.239317 1594 log.go:181] (0xc000ca00a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.71.161:80/\nI0907 08:11:46.239416 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.239437 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.239455 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.244541 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.244565 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.244581 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.245554 1594 log.go:181] (0xc0007b5130) Data frame received for 5\nI0907 08:11:46.245586 1594 log.go:181] 
(0xc000ca00a0) (5) Data frame handling\nI0907 08:11:46.245607 1594 log.go:181] (0xc000ca00a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.71.161:80/\nI0907 08:11:46.245763 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.245788 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.245807 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.251751 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.251767 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.251780 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.252810 1594 log.go:181] (0xc0007b5130) Data frame received for 5\nI0907 08:11:46.252848 1594 log.go:181] (0xc000ca00a0) (5) Data frame handling\nI0907 08:11:46.252863 1594 log.go:181] (0xc000ca00a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.71.161:80/\nI0907 08:11:46.252883 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.252892 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.252905 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.256799 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.256834 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.256867 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.257357 1594 log.go:181] (0xc0007b5130) Data frame received for 5\nI0907 08:11:46.257379 1594 log.go:181] (0xc000ca00a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.71.161:80/\nI0907 08:11:46.257402 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.257427 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.257442 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.257461 1594 log.go:181] (0xc000ca00a0) (5) Data frame sent\nI0907 08:11:46.265441 1594 log.go:181] 
(0xc0007b5130) Data frame received for 3\nI0907 08:11:46.265460 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.265475 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.266421 1594 log.go:181] (0xc0007b5130) Data frame received for 5\nI0907 08:11:46.266452 1594 log.go:181] (0xc000ca00a0) (5) Data frame handling\nI0907 08:11:46.266464 1594 log.go:181] (0xc000ca00a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.71.161:80/\nI0907 08:11:46.266483 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.266496 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.266520 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.273648 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.273673 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.273687 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.274350 1594 log.go:181] (0xc0007b5130) Data frame received for 5\nI0907 08:11:46.274365 1594 log.go:181] (0xc000ca00a0) (5) Data frame handling\nI0907 08:11:46.274373 1594 log.go:181] (0xc000ca00a0) (5) Data frame sent\nI0907 08:11:46.274381 1594 log.go:181] (0xc0007b5130) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2I0907 08:11:46.274386 1594 log.go:181] (0xc000ca00a0) (5) Data frame handling\nI0907 08:11:46.274420 1594 log.go:181] (0xc000ca00a0) (5) Data frame sent\n http://10.98.71.161:80/\nI0907 08:11:46.274449 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.274475 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.274502 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.280732 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.280746 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.280754 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.281563 1594 
log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.281581 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.281605 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.281623 1594 log.go:181] (0xc0007b5130) Data frame received for 5\nI0907 08:11:46.281632 1594 log.go:181] (0xc000ca00a0) (5) Data frame handling\nI0907 08:11:46.281642 1594 log.go:181] (0xc000ca00a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.71.161:80/\nI0907 08:11:46.289008 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.289033 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.289073 1594 log.go:181] (0xc000ca0000) (3) Data frame sent\nI0907 08:11:46.289846 1594 log.go:181] (0xc0007b5130) Data frame received for 3\nI0907 08:11:46.289871 1594 log.go:181] (0xc000ca0000) (3) Data frame handling\nI0907 08:11:46.289894 1594 log.go:181] (0xc0007b5130) Data frame received for 5\nI0907 08:11:46.289917 1594 log.go:181] (0xc000ca00a0) (5) Data frame handling\nI0907 08:11:46.292367 1594 log.go:181] (0xc0007b5130) Data frame received for 1\nI0907 08:11:46.292402 1594 log.go:181] (0xc0007ac960) (1) Data frame handling\nI0907 08:11:46.292426 1594 log.go:181] (0xc0007ac960) (1) Data frame sent\nI0907 08:11:46.292450 1594 log.go:181] (0xc0007b5130) (0xc0007ac960) Stream removed, broadcasting: 1\nI0907 08:11:46.292485 1594 log.go:181] (0xc0007b5130) Go away received\nI0907 08:11:46.292928 1594 log.go:181] (0xc0007b5130) (0xc0007ac960) Stream removed, broadcasting: 1\nI0907 08:11:46.292951 1594 log.go:181] (0xc0007b5130) (0xc000ca0000) Stream removed, broadcasting: 3\nI0907 08:11:46.292964 1594 log.go:181] (0xc0007b5130) (0xc000ca00a0) Stream removed, broadcasting: 5\n" Sep 7 08:11:46.298: INFO: stdout: 
"\naffinity-clusterip-timeout-btnqc\naffinity-clusterip-timeout-btnqc\naffinity-clusterip-timeout-btnqc\naffinity-clusterip-timeout-btnqc\naffinity-clusterip-timeout-btnqc\naffinity-clusterip-timeout-btnqc\naffinity-clusterip-timeout-btnqc\naffinity-clusterip-timeout-btnqc\naffinity-clusterip-timeout-btnqc\naffinity-clusterip-timeout-btnqc\naffinity-clusterip-timeout-btnqc\naffinity-clusterip-timeout-btnqc\naffinity-clusterip-timeout-btnqc\naffinity-clusterip-timeout-btnqc\naffinity-clusterip-timeout-btnqc\naffinity-clusterip-timeout-btnqc" Sep 7 08:11:46.298: INFO: Received response from host: affinity-clusterip-timeout-btnqc Sep 7 08:11:46.298: INFO: Received response from host: affinity-clusterip-timeout-btnqc Sep 7 08:11:46.298: INFO: Received response from host: affinity-clusterip-timeout-btnqc Sep 7 08:11:46.298: INFO: Received response from host: affinity-clusterip-timeout-btnqc Sep 7 08:11:46.298: INFO: Received response from host: affinity-clusterip-timeout-btnqc Sep 7 08:11:46.298: INFO: Received response from host: affinity-clusterip-timeout-btnqc Sep 7 08:11:46.298: INFO: Received response from host: affinity-clusterip-timeout-btnqc Sep 7 08:11:46.298: INFO: Received response from host: affinity-clusterip-timeout-btnqc Sep 7 08:11:46.298: INFO: Received response from host: affinity-clusterip-timeout-btnqc Sep 7 08:11:46.298: INFO: Received response from host: affinity-clusterip-timeout-btnqc Sep 7 08:11:46.298: INFO: Received response from host: affinity-clusterip-timeout-btnqc Sep 7 08:11:46.298: INFO: Received response from host: affinity-clusterip-timeout-btnqc Sep 7 08:11:46.298: INFO: Received response from host: affinity-clusterip-timeout-btnqc Sep 7 08:11:46.298: INFO: Received response from host: affinity-clusterip-timeout-btnqc Sep 7 08:11:46.298: INFO: Received response from host: affinity-clusterip-timeout-btnqc Sep 7 08:11:46.298: INFO: Received response from host: affinity-clusterip-timeout-btnqc Sep 7 08:11:46.298: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-5028 execpod-affinity2zrzg -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.98.71.161:80/' Sep 7 08:11:46.517: INFO: stderr: "I0907 08:11:46.440325 1612 log.go:181] (0xc0009b8000) (0xc0009b0000) Create stream\nI0907 08:11:46.440388 1612 log.go:181] (0xc0009b8000) (0xc0009b0000) Stream added, broadcasting: 1\nI0907 08:11:46.442947 1612 log.go:181] (0xc0009b8000) Reply frame received for 1\nI0907 08:11:46.443021 1612 log.go:181] (0xc0009b8000) (0xc0009b00a0) Create stream\nI0907 08:11:46.443046 1612 log.go:181] (0xc0009b8000) (0xc0009b00a0) Stream added, broadcasting: 3\nI0907 08:11:46.444631 1612 log.go:181] (0xc0009b8000) Reply frame received for 3\nI0907 08:11:46.444669 1612 log.go:181] (0xc0009b8000) (0xc000a103c0) Create stream\nI0907 08:11:46.444691 1612 log.go:181] (0xc0009b8000) (0xc000a103c0) Stream added, broadcasting: 5\nI0907 08:11:46.445794 1612 log.go:181] (0xc0009b8000) Reply frame received for 5\nI0907 08:11:46.506412 1612 log.go:181] (0xc0009b8000) Data frame received for 5\nI0907 08:11:46.506439 1612 log.go:181] (0xc000a103c0) (5) Data frame handling\nI0907 08:11:46.506460 1612 log.go:181] (0xc000a103c0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.98.71.161:80/\nI0907 08:11:46.510977 1612 log.go:181] (0xc0009b8000) Data frame received for 3\nI0907 08:11:46.511009 1612 log.go:181] (0xc0009b00a0) (3) Data frame handling\nI0907 08:11:46.511034 1612 log.go:181] (0xc0009b00a0) (3) Data frame sent\nI0907 08:11:46.511371 1612 log.go:181] (0xc0009b8000) Data frame received for 3\nI0907 08:11:46.511405 1612 log.go:181] (0xc0009b00a0) (3) Data frame handling\nI0907 08:11:46.511615 1612 log.go:181] (0xc0009b8000) Data frame received for 5\nI0907 08:11:46.511630 1612 log.go:181] (0xc000a103c0) (5) Data frame handling\nI0907 08:11:46.513410 1612 log.go:181] (0xc0009b8000) Data frame received for 1\nI0907 
08:11:46.513427 1612 log.go:181] (0xc0009b0000) (1) Data frame handling\nI0907 08:11:46.513436 1612 log.go:181] (0xc0009b0000) (1) Data frame sent\nI0907 08:11:46.513447 1612 log.go:181] (0xc0009b8000) (0xc0009b0000) Stream removed, broadcasting: 1\nI0907 08:11:46.513510 1612 log.go:181] (0xc0009b8000) Go away received\nI0907 08:11:46.513816 1612 log.go:181] (0xc0009b8000) (0xc0009b0000) Stream removed, broadcasting: 1\nI0907 08:11:46.513831 1612 log.go:181] (0xc0009b8000) (0xc0009b00a0) Stream removed, broadcasting: 3\nI0907 08:11:46.513839 1612 log.go:181] (0xc0009b8000) (0xc000a103c0) Stream removed, broadcasting: 5\n" Sep 7 08:11:46.517: INFO: stdout: "affinity-clusterip-timeout-btnqc" Sep 7 08:12:01.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-5028 execpod-affinity2zrzg -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.98.71.161:80/' Sep 7 08:12:01.781: INFO: stderr: "I0907 08:12:01.667392 1630 log.go:181] (0xc00018c370) (0xc000c9c1e0) Create stream\nI0907 08:12:01.667463 1630 log.go:181] (0xc00018c370) (0xc000c9c1e0) Stream added, broadcasting: 1\nI0907 08:12:01.669484 1630 log.go:181] (0xc00018c370) Reply frame received for 1\nI0907 08:12:01.669560 1630 log.go:181] (0xc00018c370) (0xc00043a640) Create stream\nI0907 08:12:01.669589 1630 log.go:181] (0xc00018c370) (0xc00043a640) Stream added, broadcasting: 3\nI0907 08:12:01.670611 1630 log.go:181] (0xc00018c370) Reply frame received for 3\nI0907 08:12:01.670654 1630 log.go:181] (0xc00018c370) (0xc000209e00) Create stream\nI0907 08:12:01.670667 1630 log.go:181] (0xc00018c370) (0xc000209e00) Stream added, broadcasting: 5\nI0907 08:12:01.671749 1630 log.go:181] (0xc00018c370) Reply frame received for 5\nI0907 08:12:01.770118 1630 log.go:181] (0xc00018c370) Data frame received for 5\nI0907 08:12:01.770162 1630 log.go:181] (0xc000209e00) (5) Data frame handling\nI0907 08:12:01.770194 1630 log.go:181] (0xc000209e00) 
(5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.98.71.161:80/\nI0907 08:12:01.773739 1630 log.go:181] (0xc00018c370) Data frame received for 3\nI0907 08:12:01.773761 1630 log.go:181] (0xc00043a640) (3) Data frame handling\nI0907 08:12:01.773776 1630 log.go:181] (0xc00043a640) (3) Data frame sent\nI0907 08:12:01.774676 1630 log.go:181] (0xc00018c370) Data frame received for 5\nI0907 08:12:01.774712 1630 log.go:181] (0xc000209e00) (5) Data frame handling\nI0907 08:12:01.774744 1630 log.go:181] (0xc00018c370) Data frame received for 3\nI0907 08:12:01.774766 1630 log.go:181] (0xc00043a640) (3) Data frame handling\nI0907 08:12:01.776206 1630 log.go:181] (0xc00018c370) Data frame received for 1\nI0907 08:12:01.776239 1630 log.go:181] (0xc000c9c1e0) (1) Data frame handling\nI0907 08:12:01.776271 1630 log.go:181] (0xc000c9c1e0) (1) Data frame sent\nI0907 08:12:01.776296 1630 log.go:181] (0xc00018c370) (0xc000c9c1e0) Stream removed, broadcasting: 1\nI0907 08:12:01.776330 1630 log.go:181] (0xc00018c370) Go away received\nI0907 08:12:01.776846 1630 log.go:181] (0xc00018c370) (0xc000c9c1e0) Stream removed, broadcasting: 1\nI0907 08:12:01.776868 1630 log.go:181] (0xc00018c370) (0xc00043a640) Stream removed, broadcasting: 3\nI0907 08:12:01.776878 1630 log.go:181] (0xc00018c370) (0xc000209e00) Stream removed, broadcasting: 5\n" Sep 7 08:12:01.781: INFO: stdout: "affinity-clusterip-timeout-kbtlj" Sep 7 08:12:01.781: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-5028, will wait for the garbage collector to delete the pods Sep 7 08:12:01.912: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 5.390611ms Sep 7 08:12:02.312: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 400.193494ms [AfterEach] [sig-network] Services 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:12:12.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5028" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:46.500 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":103,"skipped":1426,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:12:12.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Sep 7 08:12:19.521: INFO: 10 pods remaining Sep 7 08:12:19.521: INFO: 10 pods has nil DeletionTimestamp Sep 7 08:12:19.521: INFO: Sep 7 08:12:21.579: INFO: 0 pods remaining Sep 7 08:12:21.580: INFO: 0 pods has nil DeletionTimestamp Sep 7 08:12:21.580: INFO: Sep 7 08:12:23.088: INFO: 0 pods remaining Sep 7 08:12:23.088: INFO: 0 pods has nil DeletionTimestamp Sep 7 08:12:23.088: INFO: STEP: Gathering metrics W0907 08:12:23.455461 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Sep 7 08:13:25.474: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:13:25.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1192" for this suite. 
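The garbage-collector test above checks a specific invariant: with foreground-style deleteOptions, the replication controller must remain until every one of its pods is gone (the log shows the RC held while "10 pods remaining", completing only at "0 pods remaining"). That invariant can be sketched as a small check; the helper name and signature are illustrative, not the e2e framework's actual code:

```python
def gc_done(rc_exists, remaining_pods):
    """Foreground deletion is complete only once the pods AND the RC are gone.

    Raises if the RC disappears while pods remain, which would violate
    foreground-deletion semantics as exercised by the test above.
    """
    if remaining_pods > 0 and not rc_exists:
        raise AssertionError("RC deleted before its pods: foreground semantics violated")
    return remaining_pods == 0 and not rc_exists

# Mirrors the polling states seen in the log:
print(gc_done(rc_exists=True, remaining_pods=10))   # mid-deletion: False
print(gc_done(rc_exists=False, remaining_pods=0))   # fully collected: True
```

The test polls exactly this way: it waits for the RC to linger through the pod teardown, then confirms both are gone before gathering metrics.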
• [SLOW TEST:73.029 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":303,"completed":104,"skipped":1430,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:13:25.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 7 08:13:25.581: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 7 08:13:25.589: INFO: Waiting for terminating namespaces to be deleted... 
Sep 7 08:13:25.591: INFO: Logging pods the apiserver thinks are on node latest-worker before test Sep 7 08:13:25.596: INFO: kindnet-d72xf from kube-system started at 2020-09-06 13:49:16 +0000 UTC (1 container status recorded) Sep 7 08:13:25.596: INFO: Container kindnet-cni ready: true, restart count 0 Sep 7 08:13:25.596: INFO: kube-proxy-64mm6 from kube-system started at 2020-09-06 13:49:14 +0000 UTC (1 container status recorded) Sep 7 08:13:25.596: INFO: Container kube-proxy ready: true, restart count 0 Sep 7 08:13:25.596: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Sep 7 08:13:25.600: INFO: kindnet-dktmm from kube-system started at 2020-09-06 13:49:16 +0000 UTC (1 container status recorded) Sep 7 08:13:25.601: INFO: Container kindnet-cni ready: true, restart count 0 Sep 7 08:13:25.601: INFO: kube-proxy-b55gf from kube-system started at 2020-09-06 13:49:14 +0000 UTC (1 container status recorded) Sep 7 08:13:25.601: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-ed227466-b26a-4390-82e2-c5c0deab2131 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-ed227466-b26a-4390-82e2-c5c0deab2131 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-ed227466-b26a-4390-82e2-c5c0deab2131 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:13:33.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4811" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.383 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":303,"completed":105,"skipped":1445,"failed":0} SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:13:33.872: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-4205 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 7 08:13:33.923: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Sep 7 08:13:34.065: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 7 08:13:36.069: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 7 08:13:38.070: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 7 08:13:40.068: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 7 08:13:42.069: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 7 08:13:44.070: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 7 08:13:46.068: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 7 08:13:48.070: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 7 08:13:50.069: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 7 08:13:52.069: INFO: The status of Pod netserver-0 is Running (Ready = true) Sep 7 08:13:52.075: INFO: The status of Pod netserver-1 is Running (Ready = false) Sep 7 08:13:54.079: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Sep 7 08:13:58.117: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.70:8080/dial?request=hostname&protocol=http&host=10.244.2.69&port=8080&tries=1'] Namespace:pod-network-test-4205 PodName:test-container-pod ContainerName:webserver Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 7 08:13:58.117: INFO: >>> kubeConfig: /root/.kube/config I0907 08:13:58.157230 7 log.go:181] (0xc0002eda20) (0xc000520500) Create stream I0907 08:13:58.157269 7 log.go:181] (0xc0002eda20) (0xc000520500) Stream added, broadcasting: 1 I0907 08:13:58.159235 7 log.go:181] (0xc0002eda20) Reply frame received for 1 I0907 08:13:58.159274 7 log.go:181] (0xc0002eda20) (0xc00074a8c0) Create stream I0907 08:13:58.159283 7 log.go:181] (0xc0002eda20) (0xc00074a8c0) Stream added, broadcasting: 3 I0907 08:13:58.159949 7 log.go:181] (0xc0002eda20) Reply frame received for 3 I0907 08:13:58.159977 7 log.go:181] (0xc0002eda20) (0xc0011848c0) Create stream I0907 08:13:58.159985 7 log.go:181] (0xc0002eda20) (0xc0011848c0) Stream added, broadcasting: 5 I0907 08:13:58.160866 7 log.go:181] (0xc0002eda20) Reply frame received for 5 I0907 08:13:58.221672 7 log.go:181] (0xc0002eda20) Data frame received for 3 I0907 08:13:58.221698 7 log.go:181] (0xc00074a8c0) (3) Data frame handling I0907 08:13:58.221719 7 log.go:181] (0xc00074a8c0) (3) Data frame sent I0907 08:13:58.222323 7 log.go:181] (0xc0002eda20) Data frame received for 3 I0907 08:13:58.222356 7 log.go:181] (0xc00074a8c0) (3) Data frame handling I0907 08:13:58.222376 7 log.go:181] (0xc0002eda20) Data frame received for 5 I0907 08:13:58.222385 7 log.go:181] (0xc0011848c0) (5) Data frame handling I0907 08:13:58.224293 7 log.go:181] (0xc0002eda20) Data frame received for 1 I0907 08:13:58.224321 7 log.go:181] (0xc000520500) (1) Data frame handling I0907 08:13:58.224343 7 log.go:181] (0xc000520500) (1) Data frame sent I0907 08:13:58.224364 7 log.go:181] (0xc0002eda20) (0xc000520500) Stream removed, broadcasting: 1 I0907 08:13:58.224475 7 log.go:181] (0xc0002eda20) Go away received I0907 08:13:58.224914 7 log.go:181] (0xc0002eda20) (0xc000520500) Stream removed, broadcasting: 1 I0907 08:13:58.224932 7 log.go:181] (0xc0002eda20) (0xc00074a8c0) Stream removed, broadcasting: 3 
I0907 08:13:58.224942 7 log.go:181] (0xc0002eda20) (0xc0011848c0) Stream removed, broadcasting: 5 Sep 7 08:13:58.224: INFO: Waiting for responses: map[] Sep 7 08:13:58.229: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.70:8080/dial?request=hostname&protocol=http&host=10.244.1.51&port=8080&tries=1'] Namespace:pod-network-test-4205 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 7 08:13:58.229: INFO: >>> kubeConfig: /root/.kube/config I0907 08:13:58.266620 7 log.go:181] (0xc0006428f0) (0xc0011855e0) Create stream I0907 08:13:58.266665 7 log.go:181] (0xc0006428f0) (0xc0011855e0) Stream added, broadcasting: 1 I0907 08:13:58.270192 7 log.go:181] (0xc0006428f0) Reply frame received for 1 I0907 08:13:58.270273 7 log.go:181] (0xc0006428f0) (0xc000520aa0) Create stream I0907 08:13:58.270307 7 log.go:181] (0xc0006428f0) (0xc000520aa0) Stream added, broadcasting: 3 I0907 08:13:58.271215 7 log.go:181] (0xc0006428f0) Reply frame received for 3 I0907 08:13:58.271263 7 log.go:181] (0xc0006428f0) (0xc001fb9c20) Create stream I0907 08:13:58.271286 7 log.go:181] (0xc0006428f0) (0xc001fb9c20) Stream added, broadcasting: 5 I0907 08:13:58.272463 7 log.go:181] (0xc0006428f0) Reply frame received for 5 I0907 08:13:58.338639 7 log.go:181] (0xc0006428f0) Data frame received for 3 I0907 08:13:58.338674 7 log.go:181] (0xc000520aa0) (3) Data frame handling I0907 08:13:58.338690 7 log.go:181] (0xc000520aa0) (3) Data frame sent I0907 08:13:58.338877 7 log.go:181] (0xc0006428f0) Data frame received for 5 I0907 08:13:58.338899 7 log.go:181] (0xc001fb9c20) (5) Data frame handling I0907 08:13:58.339078 7 log.go:181] (0xc0006428f0) Data frame received for 3 I0907 08:13:58.339092 7 log.go:181] (0xc000520aa0) (3) Data frame handling I0907 08:13:58.340694 7 log.go:181] (0xc0006428f0) Data frame received for 1 I0907 08:13:58.340741 7 log.go:181] (0xc0011855e0) (1) Data frame handling I0907 
08:13:58.340760 7 log.go:181] (0xc0011855e0) (1) Data frame sent I0907 08:13:58.340789 7 log.go:181] (0xc0006428f0) (0xc0011855e0) Stream removed, broadcasting: 1 I0907 08:13:58.340807 7 log.go:181] (0xc0006428f0) Go away received I0907 08:13:58.340927 7 log.go:181] (0xc0006428f0) (0xc0011855e0) Stream removed, broadcasting: 1 I0907 08:13:58.340949 7 log.go:181] (0xc0006428f0) (0xc000520aa0) Stream removed, broadcasting: 3 I0907 08:13:58.340966 7 log.go:181] (0xc0006428f0) (0xc001fb9c20) Stream removed, broadcasting: 5 Sep 7 08:13:58.341: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:13:58.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4205" for this suite. • [SLOW TEST:24.479 seconds] [sig-network] Networking /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":303,"completed":106,"skipped":1450,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:13:58.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl replace /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1581 [It] should update a single-container pod's image [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Sep 7 08:13:58.412: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-4288' Sep 7 08:14:01.977: INFO: stderr: "" Sep 7 08:14:01.977: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Sep 7 08:14:07.027: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-4288 -o json' Sep 7 08:14:07.202: INFO: stderr: "" Sep 7 08:14:07.202: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": 
{\n \"creationTimestamp\": \"2020-09-07T08:14:01Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-09-07T08:14:01Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.52\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-09-07T08:14:05Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-4288\",\n \"resourceVersion\": \"281066\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-4288/pods/e2e-test-httpd-pod\",\n \"uid\": 
\"3b7fe881-6fb0-46a8-8b4f-c57cc7aa8692\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-9d9jx\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-9d9jx\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-9d9jx\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-07T08:14:02Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-07T08:14:05Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-07T08:14:05Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-07T08:14:01Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n 
\"containerID\": \"containerd://9b69cff35078fd7ae3493677be72b5dd5da3b85a2817b58394b4323af50809cd\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-09-07T08:14:04Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.14\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.52\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.52\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-09-07T08:14:02Z\"\n }\n}\n" STEP: replace the image in the pod Sep 7 08:14:07.202: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-4288' Sep 7 08:14:07.578: INFO: stderr: "" Sep 7 08:14:07.578: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1586 Sep 7 08:14:07.585: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4288' Sep 7 08:14:21.914: INFO: stderr: "" Sep 7 08:14:21.914: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:14:21.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4288" for this suite. 
• [SLOW TEST:23.592 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1577 should update a single-container pod's image [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":303,"completed":107,"skipped":1455,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:14:21.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded 
STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Sep 7 08:14:25.127: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:14:25.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7207" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":303,"completed":108,"skipped":1462,"failed":0} ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:14:25.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:14:25.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"lease-test-3097" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":303,"completed":109,"skipped":1462,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:14:25.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 08:14:26.169: INFO: Checking APIGroup: apiregistration.k8s.io Sep 7 08:14:26.170: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Sep 7 08:14:26.170: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Sep 7 08:14:26.170: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Sep 7 08:14:26.170: INFO: Checking APIGroup: extensions Sep 7 08:14:26.171: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Sep 7 08:14:26.171: INFO: Versions found [{extensions/v1beta1 v1beta1}] Sep 7 08:14:26.171: INFO: extensions/v1beta1 matches extensions/v1beta1 Sep 7 08:14:26.171: INFO: Checking APIGroup: apps Sep 
7 08:14:26.172: INFO: PreferredVersion.GroupVersion: apps/v1 Sep 7 08:14:26.172: INFO: Versions found [{apps/v1 v1}] Sep 7 08:14:26.172: INFO: apps/v1 matches apps/v1 Sep 7 08:14:26.172: INFO: Checking APIGroup: events.k8s.io Sep 7 08:14:26.173: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Sep 7 08:14:26.173: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Sep 7 08:14:26.173: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Sep 7 08:14:26.173: INFO: Checking APIGroup: authentication.k8s.io Sep 7 08:14:26.174: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Sep 7 08:14:26.174: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Sep 7 08:14:26.174: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Sep 7 08:14:26.174: INFO: Checking APIGroup: authorization.k8s.io Sep 7 08:14:26.175: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Sep 7 08:14:26.175: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Sep 7 08:14:26.175: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Sep 7 08:14:26.175: INFO: Checking APIGroup: autoscaling Sep 7 08:14:26.176: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Sep 7 08:14:26.176: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Sep 7 08:14:26.176: INFO: autoscaling/v1 matches autoscaling/v1 Sep 7 08:14:26.176: INFO: Checking APIGroup: batch Sep 7 08:14:26.177: INFO: PreferredVersion.GroupVersion: batch/v1 Sep 7 08:14:26.177: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Sep 7 08:14:26.177: INFO: batch/v1 matches batch/v1 Sep 7 08:14:26.177: INFO: Checking APIGroup: certificates.k8s.io Sep 7 08:14:26.177: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Sep 7 08:14:26.177: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Sep 7 08:14:26.177: INFO: 
certificates.k8s.io/v1 matches certificates.k8s.io/v1 Sep 7 08:14:26.178: INFO: Checking APIGroup: networking.k8s.io Sep 7 08:14:26.178: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Sep 7 08:14:26.178: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Sep 7 08:14:26.178: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Sep 7 08:14:26.178: INFO: Checking APIGroup: policy Sep 7 08:14:26.179: INFO: PreferredVersion.GroupVersion: policy/v1beta1 Sep 7 08:14:26.179: INFO: Versions found [{policy/v1beta1 v1beta1}] Sep 7 08:14:26.179: INFO: policy/v1beta1 matches policy/v1beta1 Sep 7 08:14:26.179: INFO: Checking APIGroup: rbac.authorization.k8s.io Sep 7 08:14:26.180: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Sep 7 08:14:26.180: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Sep 7 08:14:26.180: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Sep 7 08:14:26.180: INFO: Checking APIGroup: storage.k8s.io Sep 7 08:14:26.181: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Sep 7 08:14:26.181: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Sep 7 08:14:26.181: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Sep 7 08:14:26.181: INFO: Checking APIGroup: admissionregistration.k8s.io Sep 7 08:14:26.182: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Sep 7 08:14:26.182: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Sep 7 08:14:26.182: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Sep 7 08:14:26.182: INFO: Checking APIGroup: apiextensions.k8s.io Sep 7 08:14:26.182: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Sep 7 08:14:26.182: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Sep 7 08:14:26.183: INFO: 
apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Sep 7 08:14:26.183: INFO: Checking APIGroup: scheduling.k8s.io Sep 7 08:14:26.183: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Sep 7 08:14:26.183: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Sep 7 08:14:26.183: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Sep 7 08:14:26.183: INFO: Checking APIGroup: coordination.k8s.io Sep 7 08:14:26.184: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Sep 7 08:14:26.184: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Sep 7 08:14:26.184: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Sep 7 08:14:26.184: INFO: Checking APIGroup: node.k8s.io Sep 7 08:14:26.185: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1beta1 Sep 7 08:14:26.185: INFO: Versions found [{node.k8s.io/v1beta1 v1beta1}] Sep 7 08:14:26.185: INFO: node.k8s.io/v1beta1 matches node.k8s.io/v1beta1 Sep 7 08:14:26.185: INFO: Checking APIGroup: discovery.k8s.io Sep 7 08:14:26.186: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1 Sep 7 08:14:26.186: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}] Sep 7 08:14:26.186: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:14:26.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-8987" for this suite. 
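The discovery test above walks every API group returned by the apiserver and confirms that the group's PreferredVersion.GroupVersion appears among its advertised versions. A small sketch of that per-group validation over an APIGroupList-shaped response; the sample groups are abbreviated from the log output, not fetched live:

```python
def preferred_version_ok(group: dict) -> bool:
    """True if the group's preferredVersion is one of its listed versions,
    mirroring the per-group check in the Discovery test above."""
    preferred = group["preferredVersion"]["groupVersion"]
    return any(v["groupVersion"] == preferred for v in group["versions"])

# Two groups abbreviated from the /apis output logged above:
groups = [
    {"name": "apps",
     "preferredVersion": {"groupVersion": "apps/v1"},
     "versions": [{"groupVersion": "apps/v1", "version": "v1"}]},
    {"name": "batch",
     "preferredVersion": {"groupVersion": "batch/v1"},
     "versions": [{"groupVersion": "batch/v1", "version": "v1"},
                  {"groupVersion": "batch/v1beta1", "version": "v1beta1"}]},
]
results = {g["name"]: preferred_version_ok(g) for g in groups}
```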
•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":303,"completed":110,"skipped":1467,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:14:26.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Sep 7 08:14:26.310: INFO: Waiting up to 5m0s for pod "pod-7ef05f80-6d23-4872-b67a-8721bb5b0499" in namespace "emptydir-908" to be "Succeeded or Failed" Sep 7 08:14:26.313: INFO: Pod "pod-7ef05f80-6d23-4872-b67a-8721bb5b0499": Phase="Pending", Reason="", readiness=false. Elapsed: 2.833598ms Sep 7 08:14:28.317: INFO: Pod "pod-7ef05f80-6d23-4872-b67a-8721bb5b0499": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006611757s Sep 7 08:14:30.623: INFO: Pod "pod-7ef05f80-6d23-4872-b67a-8721bb5b0499": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.312859578s STEP: Saw pod success Sep 7 08:14:30.623: INFO: Pod "pod-7ef05f80-6d23-4872-b67a-8721bb5b0499" satisfied condition "Succeeded or Failed" Sep 7 08:14:30.626: INFO: Trying to get logs from node latest-worker pod pod-7ef05f80-6d23-4872-b67a-8721bb5b0499 container test-container: STEP: delete the pod Sep 7 08:14:30.839: INFO: Waiting for pod pod-7ef05f80-6d23-4872-b67a-8721bb5b0499 to disappear Sep 7 08:14:30.846: INFO: Pod pod-7ef05f80-6d23-4872-b67a-8721bb5b0499 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:14:30.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-908" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":111,"skipped":1479,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:14:30.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 7 08:14:31.687: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 7 08:14:33.698: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063271, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063271, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063271, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063271, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 7 08:14:35.703: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063271, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063271, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063271, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063271, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 7 08:14:38.765: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:14:39.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4266" for this suite. STEP: Destroying namespace "webhook-4266-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.572 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":303,"completed":112,"skipped":1489,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:14:39.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Sep 7 08:14:39.528: INFO: Waiting up to 
5m0s for pod "pod-f078162e-0743-4555-bb54-e5c43d85eb70" in namespace "emptydir-6547" to be "Succeeded or Failed" Sep 7 08:14:39.531: INFO: Pod "pod-f078162e-0743-4555-bb54-e5c43d85eb70": Phase="Pending", Reason="", readiness=false. Elapsed: 3.350009ms Sep 7 08:14:41.652: INFO: Pod "pod-f078162e-0743-4555-bb54-e5c43d85eb70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124643265s Sep 7 08:14:43.657: INFO: Pod "pod-f078162e-0743-4555-bb54-e5c43d85eb70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.129385516s STEP: Saw pod success Sep 7 08:14:43.657: INFO: Pod "pod-f078162e-0743-4555-bb54-e5c43d85eb70" satisfied condition "Succeeded or Failed" Sep 7 08:14:43.660: INFO: Trying to get logs from node latest-worker2 pod pod-f078162e-0743-4555-bb54-e5c43d85eb70 container test-container: STEP: delete the pod Sep 7 08:14:43.706: INFO: Waiting for pod pod-f078162e-0743-4555-bb54-e5c43d85eb70 to disappear Sep 7 08:14:43.746: INFO: Pod pod-f078162e-0743-4555-bb54-e5c43d85eb70 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:14:43.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6547" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":113,"skipped":1550,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:14:43.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 7 08:14:43.807: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c7081c91-8db4-4693-8a90-a42ecbb4028c" in namespace "projected-3050" to be "Succeeded or Failed" Sep 7 08:14:43.811: INFO: Pod "downwardapi-volume-c7081c91-8db4-4693-8a90-a42ecbb4028c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092924ms Sep 7 08:14:45.815: INFO: Pod "downwardapi-volume-c7081c91-8db4-4693-8a90-a42ecbb4028c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008701386s Sep 7 08:14:47.820: INFO: Pod "downwardapi-volume-c7081c91-8db4-4693-8a90-a42ecbb4028c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013699807s STEP: Saw pod success Sep 7 08:14:47.820: INFO: Pod "downwardapi-volume-c7081c91-8db4-4693-8a90-a42ecbb4028c" satisfied condition "Succeeded or Failed" Sep 7 08:14:47.823: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-c7081c91-8db4-4693-8a90-a42ecbb4028c container client-container: STEP: delete the pod Sep 7 08:14:47.911: INFO: Waiting for pod downwardapi-volume-c7081c91-8db4-4693-8a90-a42ecbb4028c to disappear Sep 7 08:14:47.914: INFO: Pod downwardapi-volume-c7081c91-8db4-4693-8a90-a42ecbb4028c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:14:47.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3050" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":114,"skipped":1556,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:14:47.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-75e29799-d647-43e1-9367-69b20bd71d1d STEP: Creating a pod to test consume secrets Sep 7 08:14:47.993: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-16f2494b-355e-457f-a0b9-585ba351c1e5" in namespace "projected-5052" to be "Succeeded or Failed" Sep 7 08:14:48.009: INFO: Pod "pod-projected-secrets-16f2494b-355e-457f-a0b9-585ba351c1e5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.861949ms Sep 7 08:14:50.014: INFO: Pod "pod-projected-secrets-16f2494b-355e-457f-a0b9-585ba351c1e5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021173499s Sep 7 08:14:52.019: INFO: Pod "pod-projected-secrets-16f2494b-355e-457f-a0b9-585ba351c1e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025578865s STEP: Saw pod success Sep 7 08:14:52.019: INFO: Pod "pod-projected-secrets-16f2494b-355e-457f-a0b9-585ba351c1e5" satisfied condition "Succeeded or Failed" Sep 7 08:14:52.021: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-16f2494b-355e-457f-a0b9-585ba351c1e5 container projected-secret-volume-test: STEP: delete the pod Sep 7 08:14:52.078: INFO: Waiting for pod pod-projected-secrets-16f2494b-355e-457f-a0b9-585ba351c1e5 to disappear Sep 7 08:14:52.088: INFO: Pod pod-projected-secrets-16f2494b-355e-457f-a0b9-585ba351c1e5 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:14:52.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5052" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":115,"skipped":1565,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:14:52.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 08:14:52.143: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:14:52.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8043" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":303,"completed":116,"skipped":1573,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:14:52.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 08:14:52.943: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Sep 7 08:14:52.968: INFO: Pod name sample-pod: Found 0 pods out of 1 Sep 7 08:14:57.972: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Sep 7 08:14:57.972: INFO: Creating deployment "test-rolling-update-deployment" Sep 7 08:14:57.977: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Sep 7 
08:14:57.982: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Sep 7 08:15:00.014: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Sep 7 08:15:00.016: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063298, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063298, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063298, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063298, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-c4cb8d6d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 7 08:15:02.020: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 7 08:15:02.029: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-8519 /apis/apps/v1/namespaces/deployment-8519/deployments/test-rolling-update-deployment 360d691b-3f88-4eb3-9b76-9fcd9ca58ccb 281550 1 2020-09-07 08:14:57 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-09-07 08:14:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-07 08:15:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005b052c8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-09-07 08:14:58 +0000 UTC,LastTransitionTime:2020-09-07 08:14:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" has successfully progressed.,LastUpdateTime:2020-09-07 08:15:00 +0000 UTC,LastTransitionTime:2020-09-07 08:14:58 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Sep 7 08:15:02.032: INFO: New ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9 deployment-8519 /apis/apps/v1/namespaces/deployment-8519/replicasets/test-rolling-update-deployment-c4cb8d6d9 c6eeed9b-a336-4675-9d7b-dcce3562d077 281539 1 2020-09-07 08:14:57 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 360d691b-3f88-4eb3-9b76-9fcd9ca58ccb 0xc0044a3b90 0xc0044a3b91}] [] [{kube-controller-manager Update apps/v1 2020-09-07 08:15:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"360d691b-3f88-4eb3-9b76-9fcd9ca58ccb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: c4cb8d6d9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0044a3c08 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Sep 7 08:15:02.032: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Sep 7 08:15:02.032: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-8519 /apis/apps/v1/namespaces/deployment-8519/replicasets/test-rolling-update-controller 29e9f339-05be-478e-9a98-5a5b84d6c41f 281549 2 2020-09-07 08:14:52 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 360d691b-3f88-4eb3-9b76-9fcd9ca58ccb 0xc0044a3a87 0xc0044a3a88}] [] [{e2e.test Update apps/v1 2020-09-07 08:14:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-07 08:15:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"360d691b-3f88-4eb3-9b76-9fcd9ca58ccb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0044a3b28 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 7 08:15:02.035: INFO: Pod "test-rolling-update-deployment-c4cb8d6d9-5l5ww" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9-5l5ww test-rolling-update-deployment-c4cb8d6d9- deployment-8519 /api/v1/namespaces/deployment-8519/pods/test-rolling-update-deployment-c4cb8d6d9-5l5ww e7a34168-5403-41ff-b0ff-78df5ba295d8 281538 0 2020-09-07 08:14:58 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-c4cb8d6d9 c6eeed9b-a336-4675-9d7b-dcce3562d077 0xc0031ce0b0 0xc0031ce0b1}] [] [{kube-controller-manager Update v1 2020-09-07 08:14:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6eeed9b-a336-4675-9d7b-dcce3562d077\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-07 08:15:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.55\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f2h6q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f2h6q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources
:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f2h6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{
},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:14:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:15:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:15:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:14:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.55,StartTime:2020-09-07 08:14:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-07 08:15:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://39280f7dcc4e88725abc32aacc4e422867a92747b9c8605271f6ab559e66a4fb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.55,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:15:02.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8519" for this suite. 
• [SLOW TEST:9.247 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":117,"skipped":1634,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:15:02.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl label /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1333 STEP: creating the pod Sep 7 08:15:02.201: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3909' Sep 7 08:15:02.484: INFO: stderr: "" Sep 7 08:15:02.484: INFO: stdout: "pod/pause created\n" Sep 7 
08:15:02.484: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Sep 7 08:15:02.485: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3909" to be "running and ready" Sep 7 08:15:02.521: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 36.460192ms Sep 7 08:15:04.525: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040574305s Sep 7 08:15:06.530: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.045571157s Sep 7 08:15:06.530: INFO: Pod "pause" satisfied condition "running and ready" Sep 7 08:15:06.530: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Sep 7 08:15:06.530: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3909' Sep 7 08:15:06.626: INFO: stderr: "" Sep 7 08:15:06.626: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Sep 7 08:15:06.626: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3909' Sep 7 08:15:06.736: INFO: stderr: "" Sep 7 08:15:06.736: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Sep 7 08:15:06.736: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3909' Sep 7 08:15:06.859: INFO: stderr: "" Sep 7 08:15:06.859: INFO: 
stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Sep 7 08:15:06.859: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3909' Sep 7 08:15:06.967: INFO: stderr: "" Sep 7 08:15:06.967: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1340 STEP: using delete to clean up resources Sep 7 08:15:06.968: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3909' Sep 7 08:15:07.174: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 7 08:15:07.174: INFO: stdout: "pod \"pause\" force deleted\n" Sep 7 08:15:07.174: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3909' Sep 7 08:15:07.616: INFO: stderr: "No resources found in kubectl-3909 namespace.\n" Sep 7 08:15:07.616: INFO: stdout: "" Sep 7 08:15:07.616: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3909 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 7 08:15:07.828: INFO: stderr: "" Sep 7 08:15:07.828: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:15:07.828: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3909" for this suite. • [SLOW TEST:5.853 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1330 should update the label on a resource [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":303,"completed":118,"skipped":1653,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:15:07.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all Sep 7 08:15:08.596: INFO: Waiting up to 5m0s for pod 
"client-containers-5c2cf91f-fa55-4906-8b1e-cdba233e352a" in namespace "containers-7474" to be "Succeeded or Failed" Sep 7 08:15:08.748: INFO: Pod "client-containers-5c2cf91f-fa55-4906-8b1e-cdba233e352a": Phase="Pending", Reason="", readiness=false. Elapsed: 152.08209ms Sep 7 08:15:10.934: INFO: Pod "client-containers-5c2cf91f-fa55-4906-8b1e-cdba233e352a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.337785548s Sep 7 08:15:12.944: INFO: Pod "client-containers-5c2cf91f-fa55-4906-8b1e-cdba233e352a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.347289589s STEP: Saw pod success Sep 7 08:15:12.944: INFO: Pod "client-containers-5c2cf91f-fa55-4906-8b1e-cdba233e352a" satisfied condition "Succeeded or Failed" Sep 7 08:15:12.946: INFO: Trying to get logs from node latest-worker2 pod client-containers-5c2cf91f-fa55-4906-8b1e-cdba233e352a container test-container: STEP: delete the pod Sep 7 08:15:12.982: INFO: Waiting for pod client-containers-5c2cf91f-fa55-4906-8b1e-cdba233e352a to disappear Sep 7 08:15:13.053: INFO: Pod client-containers-5c2cf91f-fa55-4906-8b1e-cdba233e352a no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:15:13.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7474" for this suite. 
• [SLOW TEST:5.164 seconds] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":303,"completed":119,"skipped":1666,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:15:13.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Sep 7 08:15:21.426: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 7 08:15:21.432: INFO: Pod pod-with-prestop-exec-hook still exists Sep 7 08:15:23.432: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 7 08:15:23.437: INFO: Pod pod-with-prestop-exec-hook still exists Sep 7 08:15:25.432: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 7 08:15:25.436: INFO: Pod pod-with-prestop-exec-hook still exists Sep 7 08:15:27.432: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 7 08:15:27.437: INFO: Pod pod-with-prestop-exec-hook still exists Sep 7 08:15:29.432: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 7 08:15:29.436: INFO: Pod pod-with-prestop-exec-hook still exists Sep 7 08:15:31.432: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 7 08:15:31.436: INFO: Pod pod-with-prestop-exec-hook still exists Sep 7 08:15:33.432: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 7 08:15:33.436: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:15:33.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8793" for this suite. 
• [SLOW TEST:20.387 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":303,"completed":120,"skipped":1676,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:15:33.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods 
STEP: Gathering metrics W0907 08:16:14.827461 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Sep 7 08:17:16.845: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Sep 7 08:17:16.845: INFO: Deleting pod "simpletest.rc-5mmgc" in namespace "gc-9355" Sep 7 08:17:16.910: INFO: Deleting pod "simpletest.rc-6p25g" in namespace "gc-9355" Sep 7 08:17:16.958: INFO: Deleting pod "simpletest.rc-8st6h" in namespace "gc-9355" Sep 7 08:17:17.055: INFO: Deleting pod "simpletest.rc-9cmlh" in namespace "gc-9355" Sep 7 08:17:17.663: INFO: Deleting pod "simpletest.rc-kqlzc" in namespace "gc-9355" Sep 7 08:17:17.746: INFO: Deleting pod "simpletest.rc-mlxdj" in namespace "gc-9355" Sep 7 08:17:18.020: INFO: Deleting pod "simpletest.rc-n6thw" in namespace "gc-9355" Sep 7 08:17:18.243: INFO: Deleting pod "simpletest.rc-n7vfh" in namespace "gc-9355" Sep 7 08:17:18.877: INFO: Deleting pod "simpletest.rc-p5wdp" in namespace "gc-9355" Sep 7 08:17:18.944: INFO: Deleting pod "simpletest.rc-v74r2" in namespace "gc-9355" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:17:19.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9355" for this suite. 
• [SLOW TEST:105.817 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":303,"completed":121,"skipped":1687,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:17:19.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 7 08:17:20.708: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 7 08:17:22.720: INFO: deployment 
status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063440, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063440, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063440, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063440, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 7 08:17:25.810: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Sep 7 08:17:25.827: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:17:25.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3563" for this suite. STEP: Destroying namespace "webhook-3563-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.713 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":303,"completed":122,"skipped":1739,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:17:25.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:17:42.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3143" for this suite. • [SLOW TEST:16.138 seconds] [sig-apps] Job /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":303,"completed":123,"skipped":1758,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:17:42.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-cddd2c1e-a10f-435b-ba7b-6079bdd5debb Sep 7 08:17:42.276: INFO: Pod name my-hostname-basic-cddd2c1e-a10f-435b-ba7b-6079bdd5debb: Found 0 pods out of 1 Sep 7 08:17:47.280: INFO: Pod name my-hostname-basic-cddd2c1e-a10f-435b-ba7b-6079bdd5debb: Found 1 pods out of 1 Sep 7 08:17:47.280: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-cddd2c1e-a10f-435b-ba7b-6079bdd5debb" are running Sep 7 08:17:47.283: INFO: Pod "my-hostname-basic-cddd2c1e-a10f-435b-ba7b-6079bdd5debb-4dzq9" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-07 08:17:42 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-07 08:17:45 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-07 08:17:45 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-07 08:17:42 +0000 UTC Reason: Message:}]) Sep 7 08:17:47.283: INFO: Trying to dial the pod Sep 7 08:17:52.295: INFO: Controller my-hostname-basic-cddd2c1e-a10f-435b-ba7b-6079bdd5debb: Got expected result from replica 1 [my-hostname-basic-cddd2c1e-a10f-435b-ba7b-6079bdd5debb-4dzq9]: "my-hostname-basic-cddd2c1e-a10f-435b-ba7b-6079bdd5debb-4dzq9", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:17:52.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6713" for this suite. 
• [SLOW TEST:10.208 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":124,"skipped":1773,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:17:52.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:18:08.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3394" for this suite. • [SLOW TEST:16.326 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":303,"completed":125,"skipped":1782,"failed":0}
S
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:18:08.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78
[It] deployment should support proportional scaling [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 7 08:18:08.720: INFO: Creating deployment "webserver-deployment"
Sep 7 08:18:08.724: INFO: Waiting for observed generation 1
Sep 7 08:18:11.062: INFO: Waiting for all required pods to come up
Sep 7 08:18:11.066: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Sep 7 08:18:21.410: INFO: Waiting for deployment "webserver-deployment" to complete
Sep 7 08:18:21.415: INFO: Updating deployment "webserver-deployment" with a non-existent image
Sep 7 08:18:21.423: INFO: Updating deployment webserver-deployment
Sep 7 08:18:21.423: INFO: Waiting for observed generation 2
Sep 7 08:18:23.433: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Sep 7 08:18:23.435: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Sep 7 08:18:23.437: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Sep 7 08:18:23.444: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Sep 7 08:18:23.444: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Sep 7 08:18:23.446: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Sep 7 08:18:23.450: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Sep 7 08:18:23.450: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Sep 7 08:18:23.460: INFO: Updating deployment webserver-deployment
Sep 7 08:18:23.460: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Sep 7 08:18:23.943: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Sep 7 08:18:24.333: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
Sep 7 08:18:24.537: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-3024 /apis/apps/v1/namespaces/deployment-3024/deployments/webserver-deployment f1d60df6-bcfa-4dd1-bc65-08b4e92f9de7 282895 3 2020-09-07 08:18:08 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-09-07 08:18:23 +0000 UTC FieldsV1
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-07 08:18:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005cb3468 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2020-09-07 08:18:22 +0000 UTC,LastTransitionTime:2020-09-07 08:18:08 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-09-07 08:18:23 +0000 UTC,LastTransitionTime:2020-09-07 08:18:23 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Sep 7 08:18:24.692: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-3024 /apis/apps/v1/namespaces/deployment-3024/replicasets/webserver-deployment-795d758f88 222bd23a-39b2-45ed-b9ef-ec05f36ab2ba 282932 3 2020-09-07 08:18:21 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment f1d60df6-bcfa-4dd1-bc65-08b4e92f9de7 0xc005cb38e7 0xc005cb38e8}] [] [{kube-controller-manager Update apps/v1 2020-09-07 08:18:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f1d60df6-bcfa-4dd1-bc65-08b4e92f9de7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005cb3968 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 7 08:18:24.692: INFO: All old ReplicaSets of Deployment "webserver-deployment": Sep 7 08:18:24.692: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-3024 /apis/apps/v1/namespaces/deployment-3024/replicasets/webserver-deployment-dd94f59b7 7b1c33ae-1703-4f93-b858-ddd1aec207d9 282934 3 2020-09-07 08:18:08 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment f1d60df6-bcfa-4dd1-bc65-08b4e92f9de7 0xc005cb39c7 0xc005cb39c8}] [] [{kube-controller-manager Update apps/v1 2020-09-07 08:18:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f1d60df6-bcfa-4dd1-bc65-08b4e92f9de7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&
v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005cb3a38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Sep 7 08:18:24.841: INFO: Pod "webserver-deployment-795d758f88-5g529" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-5g529 webserver-deployment-795d758f88- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-795d758f88-5g529 cb87dc9a-085d-4abc-a9c4-cd2ffdcdf8bd 282837 0 2020-09-07 08:18:21 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 222bd23a-39b2-45ed-b9ef-ec05f36ab2ba 0xc00377fb37 0xc00377fb38}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"222bd23a-39b2-45ed-b9ef-ec05f36ab2ba\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-07 08:18:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Condition
s:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-09-07 08:18:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 7 08:18:24.841: INFO: Pod "webserver-deployment-795d758f88-5rjqq" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-5rjqq webserver-deployment-795d758f88- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-795d758f88-5rjqq 8366b3fb-e0a0-4f77-a44a-f359bd2b3d8f 282923 0 2020-09-07 08:18:24 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 222bd23a-39b2-45ed-b9ef-ec05f36ab2ba 0xc00377fd00 0xc00377fd01}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"222bd23a-39b2-45ed-b9ef-ec05f36ab2ba\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 7 08:18:24.841: INFO: Pod "webserver-deployment-795d758f88-9hfk6" is not available: 
&Pod{ObjectMeta:{webserver-deployment-795d758f88-9hfk6 webserver-deployment-795d758f88- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-795d758f88-9hfk6 a6f938e9-f192-449f-9cf4-de2fb7ec668f 282936 0 2020-09-07 08:18:23 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 222bd23a-39b2-45ed-b9ef-ec05f36ab2ba 0xc00377fe40 0xc00377fe41}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"222bd23a-39b2-45ed-b9ef-ec05f36ab2ba\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-07 08:18:24 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-09-07 08:18:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 7 08:18:24.841: INFO: Pod "webserver-deployment-795d758f88-9sbmb" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-9sbmb webserver-deployment-795d758f88- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-795d758f88-9sbmb fa824df6-ba4c-4a48-b090-e71f428447f7 282915 0 2020-09-07 08:18:24 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 222bd23a-39b2-45ed-b9ef-ec05f36ab2ba 0xc00377ffe0 0xc00377ffe1}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"222bd23a-39b2-45ed-b9ef-ec05f36ab2ba\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 7 08:18:24.841: INFO: Pod "webserver-deployment-795d758f88-bgjd6" is not available: 
&Pod{ObjectMeta:{webserver-deployment-795d758f88-bgjd6 webserver-deployment-795d758f88- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-795d758f88-bgjd6 f6c50ec0-ced3-4637-9528-1b61df181438 282907 0 2020-09-07 08:18:24 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 222bd23a-39b2-45ed-b9ef-ec05f36ab2ba 0xc0006c0260 0xc0006c0261}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"222bd23a-39b2-45ed-b9ef-ec05f36ab2ba\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 7 08:18:24.842: INFO: Pod "webserver-deployment-795d758f88-bwq87" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-bwq87 webserver-deployment-795d758f88- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-795d758f88-bwq87 24e54340-ac22-487d-a785-b7ad329fc8a8 282842 0 2020-09-07 08:18:21 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 222bd23a-39b2-45ed-b9ef-ec05f36ab2ba 0xc0006c03c0 0xc0006c03c1}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"222bd23a-39b2-45ed-b9ef-ec05f36ab2ba\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-07 08:18:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-09-07 08:18:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 7 08:18:24.842: INFO: Pod "webserver-deployment-795d758f88-csnw6" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-csnw6 webserver-deployment-795d758f88- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-795d758f88-csnw6 031eb1b8-f01d-4d5c-8d7a-46e80dc614f5 282933 0 2020-09-07 08:18:24 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 222bd23a-39b2-45ed-b9ef-ec05f36ab2ba 0xc0006c05c0 0xc0006c05c1}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"222bd23a-39b2-45ed-b9ef-ec05f36ab2ba\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 7 08:18:24.842: INFO: Pod "webserver-deployment-795d758f88-lfl5k" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-lfl5k webserver-deployment-795d758f88- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-795d758f88-lfl5k 8194e5ab-fd84-405e-b8a0-45e101c2b7a8 282926 0 2020-09-07 08:18:24 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 222bd23a-39b2-45ed-b9ef-ec05f36ab2ba 0xc0006c07a0 0xc0006c07a1}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"222bd23a-39b2-45ed-b9ef-ec05f36ab2ba\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 7 08:18:24.842: INFO: Pod "webserver-deployment-795d758f88-mjlhq" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-mjlhq webserver-deployment-795d758f88- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-795d758f88-mjlhq 0d2f98af-2f48-4408-8089-86d48eb4f476 282897 0 2020-09-07 08:18:24 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 222bd23a-39b2-45ed-b9ef-ec05f36ab2ba 0xc0006c0910 0xc0006c0911}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"222bd23a-39b2-45ed-b9ef-ec05f36ab2ba\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 7 08:18:24.842: INFO: Pod "webserver-deployment-795d758f88-mmszm" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-mmszm webserver-deployment-795d758f88- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-795d758f88-mmszm 1b5f4ff2-b10e-457b-a0d2-b0c9cb79c31c 282916 0 2020-09-07 08:18:24 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 222bd23a-39b2-45ed-b9ef-ec05f36ab2ba 0xc0006c0a80 0xc0006c0a81}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"222bd23a-39b2-45ed-b9ef-ec05f36ab2ba\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 7 08:18:24.843: INFO: Pod "webserver-deployment-795d758f88-nlbgp" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-nlbgp webserver-deployment-795d758f88- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-795d758f88-nlbgp 9842b8c5-33ec-4c90-86aa-e76c22a4b9e6 282852 0 2020-09-07 08:18:21 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 222bd23a-39b2-45ed-b9ef-ec05f36ab2ba 0xc0006c0c00 0xc0006c0c01}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"222bd23a-39b2-45ed-b9ef-ec05f36ab2ba\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-07 08:18:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-09-07 08:18:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 7 08:18:24.843: INFO: Pod "webserver-deployment-795d758f88-znw9d" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-znw9d webserver-deployment-795d758f88- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-795d758f88-znw9d 3511bfe9-6a71-4d24-a751-631800bf95fe 282857 0 2020-09-07 08:18:21 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 222bd23a-39b2-45ed-b9ef-ec05f36ab2ba 0xc0006c0ea0 0xc0006c0ea1}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"222bd23a-39b2-45ed-b9ef-ec05f36ab2ba\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-07 08:18:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-07 08:18:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-09-07 08:18:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 7 08:18:24.843: INFO: Pod "webserver-deployment-795d758f88-zr64s" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-zr64s webserver-deployment-795d758f88- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-795d758f88-zr64s 7bec5d9a-ea79-45c3-b0cc-b062f309d519 282865 0 2020-09-07 08:18:21 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 222bd23a-39b2-45ed-b9ef-ec05f36ab2ba 0xc0006c1080 0xc0006c1081}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"222bd23a-39b2-45ed-b9ef-ec05f36ab2ba\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-07 08:18:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-07 08:18:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-09-07 08:18:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 7 08:18:24.843: INFO: Pod "webserver-deployment-dd94f59b7-5srl7" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-5srl7 webserver-deployment-dd94f59b7- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-dd94f59b7-5srl7 806a9071-4bc9-4984-9997-48e44c54173b 282896 0 2020-09-07 08:18:24 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b1c33ae-1703-4f93-b858-ddd1aec207d9 0xc0006c12f0 0xc0006c12f1}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b1c33ae-1703-4f93-b858-ddd1aec207d9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 7 08:18:24.843: INFO: Pod "webserver-deployment-dd94f59b7-6mnwd" is not available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-6mnwd webserver-deployment-dd94f59b7- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-dd94f59b7-6mnwd 548d67f8-bd83-4cec-8c04-66f87e2fa1c9 282886 0 2020-09-07 08:18:23 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b1c33ae-1703-4f93-b858-ddd1aec207d9 0xc0006c1800 0xc0006c1801}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b1c33ae-1703-4f93-b858-ddd1aec207d9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[
]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{P
odCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 7 08:18:24.844: INFO: Pod "webserver-deployment-dd94f59b7-8rj8v" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-8rj8v webserver-deployment-dd94f59b7- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-dd94f59b7-8rj8v b62d9c17-a235-44a0-bbc4-dfc90647b3fc 282756 0 2020-09-07 08:18:08 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b1c33ae-1703-4f93-b858-ddd1aec207d9 0xc0006c19c0 0xc0006c19c1}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b1c33ae-1703-4f93-b858-ddd1aec207d9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-07 08:18:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.87\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:08 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.2.87,StartTime:2020-09-07 08:18:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-07 08:18:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://80fec1fd25408be8c4d027ebe0434373ab27cc53e4ac7c7aa710157d196fb6e3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.87,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 7 08:18:24.844: INFO: Pod "webserver-deployment-dd94f59b7-8sqgm" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-8sqgm webserver-deployment-dd94f59b7- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-dd94f59b7-8sqgm 01c794da-486b-41c4-bd93-993217ddda4a 282797 0 2020-09-07 08:18:08 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b1c33ae-1703-4f93-b858-ddd1aec207d9 0xc0006c1cf7 0xc0006c1cf8}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b1c33ae-1703-4f93-b858-ddd1aec207d9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-07 08:18:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.90\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Resou
rceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHo
stnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.2.90,StartTime:2020-09-07 08:18:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-07 08:18:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f5b6e3d1d5bb6fa11298d61e67ef93baabfd57a4e2251bb3954ac0211d2f533b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.90,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 7 08:18:24.844: INFO: Pod "webserver-deployment-dd94f59b7-98dhs" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-98dhs webserver-deployment-dd94f59b7- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-dd94f59b7-98dhs 633c1749-9703-49f2-8b67-d8a7ca7cf000 282906 0 2020-09-07 08:18:24 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b1c33ae-1703-4f93-b858-ddd1aec207d9 0xc000c02437 
0xc000c02438}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b1c33ae-1703-4f93-b858-ddd1aec207d9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,R
eadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 7 08:18:24.844: 
INFO: Pod "webserver-deployment-dd94f59b7-dmlh6" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-dmlh6 webserver-deployment-dd94f59b7- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-dd94f59b7-dmlh6 fba6fb84-2296-4da1-b4c8-20d6a1ad2ce3 282898 0 2020-09-07 08:18:24 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b1c33ae-1703-4f93-b858-ddd1aec207d9 0xc000c02620 0xc000c02621}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b1c33ae-1703-4f93-b858-ddd1aec207d9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirem
ents{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQD
N:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 7 08:18:24.845: INFO: Pod "webserver-deployment-dd94f59b7-hgt4f" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-hgt4f webserver-deployment-dd94f59b7- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-dd94f59b7-hgt4f 11c7dbe3-b716-4739-8405-8adefc93a542 282777 0 2020-09-07 08:18:08 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b1c33ae-1703-4f93-b858-ddd1aec207d9 0xc000c02b60 0xc000c02b61}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b1c33ae-1703-4f93-b858-ddd1aec207d9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-07 08:18:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.89\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:08 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.2.89,StartTime:2020-09-07 08:18:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-07 08:18:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8a8426494083e0213c3cbe3401cab99215183d189aac0fb91085261ae7252e71,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.89,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 7 08:18:24.845: INFO: Pod "webserver-deployment-dd94f59b7-jdsmn" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-jdsmn webserver-deployment-dd94f59b7- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-dd94f59b7-jdsmn e3c1b76e-a5f3-4302-a61b-feb4ae4d7e55 282917 0 2020-09-07 08:18:24 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b1c33ae-1703-4f93-b858-ddd1aec207d9 0xc000c02ee7 0xc000c02ee8}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b1c33ae-1703-4f93-b858-ddd1aec207d9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 7 08:18:24.845: INFO: Pod "webserver-deployment-dd94f59b7-kb24p" is not available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-kb24p webserver-deployment-dd94f59b7- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-dd94f59b7-kb24p da2c57a0-b8bb-4300-8324-f3016e92a1c2 282910 0 2020-09-07 08:18:24 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b1c33ae-1703-4f93-b858-ddd1aec207d9 0xc000c030d0 0xc000c030d1}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b1c33ae-1703-4f93-b858-ddd1aec207d9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[
]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{Po
dCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 7 08:18:24.845: INFO: Pod "webserver-deployment-dd94f59b7-kqzfp" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-kqzfp webserver-deployment-dd94f59b7- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-dd94f59b7-kqzfp 3017ac0f-606c-4fa6-a073-b60b79541ad1 282913 0 2020-09-07 08:18:23 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b1c33ae-1703-4f93-b858-ddd1aec207d9 0xc000c03380 0xc000c03381}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b1c33ae-1703-4f93-b858-ddd1aec207d9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-07 08:18:24 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-07 08:18:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-09-07 08:18:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 7 08:18:24.845: INFO: Pod "webserver-deployment-dd94f59b7-lm4lx" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-lm4lx webserver-deployment-dd94f59b7- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-dd94f59b7-lm4lx 23910421-d6bc-4e88-b3ec-9755f5eff9a5 282927 0 2020-09-07 08:18:24 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b1c33ae-1703-4f93-b858-ddd1aec207d9 0xc000c03547 0xc000c03548}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b1c33ae-1703-4f93-b858-ddd1aec207d9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 7 08:18:24.845: INFO: Pod "webserver-deployment-dd94f59b7-ndj2z" is available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-ndj2z webserver-deployment-dd94f59b7- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-dd94f59b7-ndj2z 7484fb66-23a0-4d45-afbd-0c2f91e0612c 282785 0 2020-09-07 08:18:08 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b1c33ae-1703-4f93-b858-ddd1aec207d9 0xc000c03710 0xc000c03711}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b1c33ae-1703-4f93-b858-ddd1aec207d9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-07 08:18:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.69\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:08 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.69,StartTime:2020-09-07 08:18:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-07 08:18:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://bd9a09f341d73aeabf2b245d69e46222fe6d8aa8352b13ed32eed024ad170175,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.69,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 7 08:18:24.846: INFO: Pod "webserver-deployment-dd94f59b7-pfs6t" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-pfs6t webserver-deployment-dd94f59b7- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-dd94f59b7-pfs6t e4918d19-c702-40af-a7c7-f5ef833d02b8 282749 0 2020-09-07 08:18:08 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b1c33ae-1703-4f93-b858-ddd1aec207d9 0xc000c03a77 0xc000c03a78}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b1c33ae-1703-4f93-b858-ddd1aec207d9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-07 08:18:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.86\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Resou
rceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHo
stnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.2.86,StartTime:2020-09-07 08:18:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-07 08:18:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2e29ab19aadbeba733c5cfc736c94c6a231a3a59e70e680befd777759e3909aa,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.86,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 7 08:18:24.846: INFO: Pod "webserver-deployment-dd94f59b7-plbbd" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-plbbd webserver-deployment-dd94f59b7- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-dd94f59b7-plbbd 056069ce-025a-4b4f-86b0-39b2d9eb22e0 282921 0 2020-09-07 08:18:24 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b1c33ae-1703-4f93-b858-ddd1aec207d9 0xc000c03d27 
0xc000c03d28}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b1c33ae-1703-4f93-b858-ddd1aec207d9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,R
eadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 7 08:18:24.846: 
INFO: Pod "webserver-deployment-dd94f59b7-qx7g9" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-qx7g9 webserver-deployment-dd94f59b7- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-dd94f59b7-qx7g9 1bc68231-1f2d-4ed6-b7f1-f625630f5f9d 282792 0 2020-09-07 08:18:08 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b1c33ae-1703-4f93-b858-ddd1aec207d9 0xc000c03e60 0xc000c03e61}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b1c33ae-1703-4f93-b858-ddd1aec207d9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-07 08:18:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.70\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:08 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.70,StartTime:2020-09-07 08:18:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-07 08:18:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b52b7db25f2686901380e1fba8254c60428947909c8ee1470fb3a2cc3e00632d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.70,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 7 08:18:24.846: INFO: Pod "webserver-deployment-dd94f59b7-t9xbg" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-t9xbg webserver-deployment-dd94f59b7- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-dd94f59b7-t9xbg 4c4fee2a-2d97-4122-9e1a-04b4f33831ff 282803 0 2020-09-07 08:18:08 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b1c33ae-1703-4f93-b858-ddd1aec207d9 0xc002966197 0xc002966198}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b1c33ae-1703-4f93-b858-ddd1aec207d9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-07 08:18:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.68\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Resou
rceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetH
ostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.68,StartTime:2020-09-07 08:18:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-07 08:18:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c7e76e30ef1703b348ee19b1a770c3168eccecf00613774d522a29121ec71083,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 7 08:18:24.846: INFO: Pod "webserver-deployment-dd94f59b7-vszlw" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-vszlw webserver-deployment-dd94f59b7- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-dd94f59b7-vszlw 5df1ec21-74a0-4622-9311-9bc233eaaa84 282935 0 2020-09-07 08:18:23 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b1c33ae-1703-4f93-b858-ddd1aec207d9 0xc002966587 
0xc002966588}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b1c33ae-1703-4f93-b858-ddd1aec207d9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-07 08:18:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDi
r:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadCon
straint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-09-07 08:18:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 7 08:18:24.847: INFO: Pod "webserver-deployment-dd94f59b7-xq44b" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-xq44b webserver-deployment-dd94f59b7- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-dd94f59b7-xq44b dc5a2aaa-8737-49a9-9004-94f13f977213 282925 0 2020-09-07 08:18:24 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b1c33ae-1703-4f93-b858-ddd1aec207d9 0xc002966727 0xc002966728}] [] 
[{kube-controller-manager Update v1 2020-09-07 08:18:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b1c33ae-1703-4f93-b858-ddd1aec207d9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesys
tem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 7 08:18:24.847: INFO: Pod 
"webserver-deployment-dd94f59b7-zfv9c" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-zfv9c webserver-deployment-dd94f59b7- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-dd94f59b7-zfv9c 476f6b96-75dc-41f1-a5a7-dd15f6a44875 282922 0 2020-09-07 08:18:24 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b1c33ae-1703-4f93-b858-ddd1aec207d9 0xc002966850 0xc002966851}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b1c33ae-1703-4f93-b858-ddd1aec207d9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limit
s:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},S
tatus:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 7 08:18:24.847: INFO: Pod "webserver-deployment-dd94f59b7-zzwnj" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-zzwnj webserver-deployment-dd94f59b7- deployment-3024 /api/v1/namespaces/deployment-3024/pods/webserver-deployment-dd94f59b7-zzwnj 7dd3ef32-6d13-4513-992f-06f9e7aca98f 282767 0 2020-09-07 08:18:08 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b1c33ae-1703-4f93-b858-ddd1aec207d9 0xc002966980 0xc002966981}] [] [{kube-controller-manager Update v1 2020-09-07 08:18:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b1c33ae-1703-4f93-b858-ddd1aec207d9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-07 08:18:18 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.88\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c2rcr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c2rcr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c2rcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:18:08 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.2.88,StartTime:2020-09-07 08:18:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-07 08:18:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://efb555ce1d24a8e6512c3223ee2b2f28a6ac04095cbe33fee2e34b66a93596fe,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.88,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:18:24.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3024" for this suite. 
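The ReplicaSet churn dumped above comes from the Deployment proportional-scaling conformance test: when a Deployment is resized mid-rollout, the controller splits the new replica total across the old and new ReplicaSets in proportion to their current sizes. As a rough illustration of that arithmetic (a simplified sketch, not the actual Deployment controller code — the real logic in `k8s.io/kubernetes` also weighs max-surge and annotations), the split can be modeled like this:

```python
def scale_proportionally(replica_counts, new_total):
    """Split new_total across ReplicaSets in proportion to their
    current sizes. Simplified sketch of proportional scaling: each
    ReplicaSet keeps its share of the total, and any rounding
    leftover goes to the largest ReplicaSet so the sum is exact."""
    current_total = sum(replica_counts)
    if current_total == 0:
        return replica_counts
    scaled = [rs * new_total // current_total for rs in replica_counts]
    # Integer division can under-allocate; hand the remainder to the
    # largest ReplicaSet so the counts add up to the requested total.
    leftover = new_total - sum(scaled)
    scaled[scaled.index(max(scaled))] += leftover
    return scaled

print(scale_proportionally([5, 5], 30))  # both ReplicaSets grow evenly
print(scale_proportionally([1, 2], 10))  # leftover lands on the larger one
```

The invariant the conformance test checks is the same one this sketch preserves: after scaling, the per-ReplicaSet counts always sum to the Deployment's requested replicas.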
• [SLOW TEST:16.531 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":303,"completed":126,"skipped":1783,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:18:25.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Sep 7 08:18:25.587: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6343 
/api/v1/namespaces/watch-6343/configmaps/e2e-watch-test-resource-version c68e5d30-c0cc-4629-9563-9578f5307b8f 282990 0 2020-09-07 08:18:25 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-09-07 08:18:25 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 7 08:18:25.587: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6343 /api/v1/namespaces/watch-6343/configmaps/e2e-watch-test-resource-version c68e5d30-c0cc-4629-9563-9578f5307b8f 282991 0 2020-09-07 08:18:25 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-09-07 08:18:25 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:18:25.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6343" for this suite. 
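The watch test above creates a ConfigMap, mutates it twice, deletes it, then opens a watch at the resourceVersion returned by the *first* update — and correctly receives only the later MODIFIED (rv 282990) and DELETED (rv 282991) events. The semantics can be sketched in plain Python (an illustrative model only; `replay_from`, `history`, and the `rv`/`type` keys are hypothetical names, not the client-go or kubernetes-client API, and real resourceVersions are opaque strings rather than comparable integers):

```python
def replay_from(events, resource_version):
    """Model of watch semantics: starting a watch at a given
    resourceVersion delivers only events that occurred strictly
    after that version."""
    return [e for e in events if e["rv"] > resource_version]

# Event history mirroring the test: create, two updates, delete.
history = [
    {"rv": 282988, "type": "ADDED"},
    {"rv": 282989, "type": "MODIFIED"},  # first update
    {"rv": 282990, "type": "MODIFIED"},  # second update
    {"rv": 282991, "type": "DELETED"},
]

# Watching from the first update's version yields exactly the two
# notifications seen in the log: MODIFIED then DELETED.
print([e["type"] for e in replay_from(history, 282989)])
```

This is why the log shows two `Got :` lines rather than four: events at or before the supplied resourceVersion are never replayed to the new watcher.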
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":303,"completed":127,"skipped":1793,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:18:25.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-544e5352-b7cd-4fbe-a8eb-ef99ac94e056 STEP: Creating a pod to test consume configMaps Sep 7 08:18:27.318: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-46a8b0da-90ab-484f-869a-04b69db665a9" in namespace "projected-9764" to be "Succeeded or Failed" Sep 7 08:18:27.575: INFO: Pod "pod-projected-configmaps-46a8b0da-90ab-484f-869a-04b69db665a9": Phase="Pending", Reason="", readiness=false. Elapsed: 256.352551ms Sep 7 08:18:29.579: INFO: Pod "pod-projected-configmaps-46a8b0da-90ab-484f-869a-04b69db665a9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.261053765s Sep 7 08:18:31.625: INFO: Pod "pod-projected-configmaps-46a8b0da-90ab-484f-869a-04b69db665a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30661679s Sep 7 08:18:34.261: INFO: Pod "pod-projected-configmaps-46a8b0da-90ab-484f-869a-04b69db665a9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.942763533s Sep 7 08:18:36.538: INFO: Pod "pod-projected-configmaps-46a8b0da-90ab-484f-869a-04b69db665a9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.219430804s Sep 7 08:18:38.776: INFO: Pod "pod-projected-configmaps-46a8b0da-90ab-484f-869a-04b69db665a9": Phase="Pending", Reason="", readiness=false. Elapsed: 11.457817505s Sep 7 08:18:40.901: INFO: Pod "pod-projected-configmaps-46a8b0da-90ab-484f-869a-04b69db665a9": Phase="Pending", Reason="", readiness=false. Elapsed: 13.582783867s Sep 7 08:18:43.081: INFO: Pod "pod-projected-configmaps-46a8b0da-90ab-484f-869a-04b69db665a9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.763023982s Sep 7 08:18:45.105: INFO: Pod "pod-projected-configmaps-46a8b0da-90ab-484f-869a-04b69db665a9": Phase="Running", Reason="", readiness=true. Elapsed: 17.786302589s Sep 7 08:18:47.109: INFO: Pod "pod-projected-configmaps-46a8b0da-90ab-484f-869a-04b69db665a9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 19.790848424s STEP: Saw pod success Sep 7 08:18:47.109: INFO: Pod "pod-projected-configmaps-46a8b0da-90ab-484f-869a-04b69db665a9" satisfied condition "Succeeded or Failed" Sep 7 08:18:47.112: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-46a8b0da-90ab-484f-869a-04b69db665a9 container projected-configmap-volume-test: STEP: delete the pod Sep 7 08:18:47.322: INFO: Waiting for pod pod-projected-configmaps-46a8b0da-90ab-484f-869a-04b69db665a9 to disappear Sep 7 08:18:47.346: INFO: Pod pod-projected-configmaps-46a8b0da-90ab-484f-869a-04b69db665a9 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:18:47.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9764" for this suite. • [SLOW TEST:21.785 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":128,"skipped":1830,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] 
Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:18:47.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-690/secret-test-29207ea1-09ed-4844-959c-0fe21a0db0fd STEP: Creating a pod to test consume secrets Sep 7 08:18:47.503: INFO: Waiting up to 5m0s for pod "pod-configmaps-af4f851b-1b95-48d5-a0a7-5c31d71087cf" in namespace "secrets-690" to be "Succeeded or Failed" Sep 7 08:18:47.507: INFO: Pod "pod-configmaps-af4f851b-1b95-48d5-a0a7-5c31d71087cf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.550009ms Sep 7 08:18:49.511: INFO: Pod "pod-configmaps-af4f851b-1b95-48d5-a0a7-5c31d71087cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007396453s Sep 7 08:18:51.571: INFO: Pod "pod-configmaps-af4f851b-1b95-48d5-a0a7-5c31d71087cf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.068044095s STEP: Saw pod success Sep 7 08:18:51.572: INFO: Pod "pod-configmaps-af4f851b-1b95-48d5-a0a7-5c31d71087cf" satisfied condition "Succeeded or Failed" Sep 7 08:18:51.574: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-af4f851b-1b95-48d5-a0a7-5c31d71087cf container env-test: STEP: delete the pod Sep 7 08:18:51.616: INFO: Waiting for pod pod-configmaps-af4f851b-1b95-48d5-a0a7-5c31d71087cf to disappear Sep 7 08:18:51.627: INFO: Pod pod-configmaps-af4f851b-1b95-48d5-a0a7-5c31d71087cf no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:18:51.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-690" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":129,"skipped":1851,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:18:51.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be 
restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-b3070696-8518-472e-a998-55f3a90dfbf6 in namespace container-probe-2357 Sep 7 08:18:55.761: INFO: Started pod liveness-b3070696-8518-472e-a998-55f3a90dfbf6 in namespace container-probe-2357 STEP: checking the pod's current state and verifying that restartCount is present Sep 7 08:18:55.763: INFO: Initial restart count of pod liveness-b3070696-8518-472e-a998-55f3a90dfbf6 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:22:56.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2357" for this suite. • [SLOW TEST:244.984 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":303,"completed":130,"skipped":1860,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:22:56.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-ff9c6e5d-325b-4ba4-a272-e024b06a5d9d STEP: Creating a pod to test consume secrets Sep 7 08:22:56.907: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5442b815-273a-4692-a262-77e1152cfede" in namespace "projected-8824" to be "Succeeded or Failed" Sep 7 08:22:56.959: INFO: Pod "pod-projected-secrets-5442b815-273a-4692-a262-77e1152cfede": Phase="Pending", Reason="", readiness=false. Elapsed: 52.014086ms Sep 7 08:22:58.965: INFO: Pod "pod-projected-secrets-5442b815-273a-4692-a262-77e1152cfede": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05719234s Sep 7 08:23:00.969: INFO: Pod "pod-projected-secrets-5442b815-273a-4692-a262-77e1152cfede": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.061745377s STEP: Saw pod success Sep 7 08:23:00.969: INFO: Pod "pod-projected-secrets-5442b815-273a-4692-a262-77e1152cfede" satisfied condition "Succeeded or Failed" Sep 7 08:23:00.973: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-5442b815-273a-4692-a262-77e1152cfede container projected-secret-volume-test: STEP: delete the pod Sep 7 08:23:01.025: INFO: Waiting for pod pod-projected-secrets-5442b815-273a-4692-a262-77e1152cfede to disappear Sep 7 08:23:01.038: INFO: Pod pod-projected-secrets-5442b815-273a-4692-a262-77e1152cfede no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:23:01.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8824" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":131,"skipped":1915,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:23:01.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Sep 7 08:23:01.086: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. Sep 7 08:23:01.936: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Sep 7 08:23:04.590: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063781, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063781, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063781, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063781, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67c46cd746\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 7 08:23:06.606: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063781, loc:(*time.Location)(0x7702840)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063781, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063781, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063781, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67c46cd746\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 7 08:23:09.326: INFO: Waited 724.93144ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:23:09.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-3192" for this suite. 
• [SLOW TEST:8.921 seconds] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":303,"completed":132,"skipped":1932,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:23:09.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Sep 7 08:23:10.280: INFO: Waiting up to 5m0s for pod "downward-api-fea1c963-a179-452d-ac6c-343b34991eb4" in namespace "downward-api-5951" to be "Succeeded or Failed" Sep 7 08:23:10.284: INFO: Pod "downward-api-fea1c963-a179-452d-ac6c-343b34991eb4": Phase="Pending", 
Reason="", readiness=false. Elapsed: 4.367495ms Sep 7 08:23:12.288: INFO: Pod "downward-api-fea1c963-a179-452d-ac6c-343b34991eb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008535844s Sep 7 08:23:14.294: INFO: Pod "downward-api-fea1c963-a179-452d-ac6c-343b34991eb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013795462s STEP: Saw pod success Sep 7 08:23:14.294: INFO: Pod "downward-api-fea1c963-a179-452d-ac6c-343b34991eb4" satisfied condition "Succeeded or Failed" Sep 7 08:23:14.297: INFO: Trying to get logs from node latest-worker pod downward-api-fea1c963-a179-452d-ac6c-343b34991eb4 container dapi-container: STEP: delete the pod Sep 7 08:23:14.373: INFO: Waiting for pod downward-api-fea1c963-a179-452d-ac6c-343b34991eb4 to disappear Sep 7 08:23:14.385: INFO: Pod downward-api-fea1c963-a179-452d-ac6c-343b34991eb4 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:23:14.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5951" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":303,"completed":133,"skipped":1944,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:23:14.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium Sep 7 08:23:14.441: INFO: Waiting up to 5m0s for pod "pod-37ddd78a-4a90-4bfd-81a6-e24794768a31" in namespace "emptydir-3970" to be "Succeeded or Failed" Sep 7 08:23:14.451: INFO: Pod "pod-37ddd78a-4a90-4bfd-81a6-e24794768a31": Phase="Pending", Reason="", readiness=false. Elapsed: 10.07771ms Sep 7 08:23:16.456: INFO: Pod "pod-37ddd78a-4a90-4bfd-81a6-e24794768a31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014517783s Sep 7 08:23:18.460: INFO: Pod "pod-37ddd78a-4a90-4bfd-81a6-e24794768a31": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018942771s STEP: Saw pod success Sep 7 08:23:18.460: INFO: Pod "pod-37ddd78a-4a90-4bfd-81a6-e24794768a31" satisfied condition "Succeeded or Failed" Sep 7 08:23:18.463: INFO: Trying to get logs from node latest-worker pod pod-37ddd78a-4a90-4bfd-81a6-e24794768a31 container test-container: STEP: delete the pod Sep 7 08:23:18.495: INFO: Waiting for pod pod-37ddd78a-4a90-4bfd-81a6-e24794768a31 to disappear Sep 7 08:23:18.511: INFO: Pod pod-37ddd78a-4a90-4bfd-81a6-e24794768a31 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:23:18.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3970" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":134,"skipped":1948,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:23:18.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support remote command execution over websockets 
[NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 08:23:18.617: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:23:22.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8494" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":303,"completed":135,"skipped":1989,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:23:22.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 08:23:22.902: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-555' Sep 7 08:23:23.235: INFO: stderr: "" Sep 7 08:23:23.235: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Sep 7 08:23:23.235: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-555' Sep 7 08:23:23.609: INFO: stderr: "" Sep 7 08:23:23.609: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Sep 7 08:23:24.614: INFO: Selector matched 1 pods for map[app:agnhost] Sep 7 08:23:24.614: INFO: Found 0 / 1 Sep 7 08:23:25.614: INFO: Selector matched 1 pods for map[app:agnhost] Sep 7 08:23:25.614: INFO: Found 0 / 1 Sep 7 08:23:26.635: INFO: Selector matched 1 pods for map[app:agnhost] Sep 7 08:23:26.635: INFO: Found 1 / 1 Sep 7 08:23:26.635: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Sep 7 08:23:26.638: INFO: Selector matched 1 pods for map[app:agnhost] Sep 7 08:23:26.638: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Sep 7 08:23:26.638: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config describe pod agnhost-primary-kstbt --namespace=kubectl-555' Sep 7 08:23:26.761: INFO: stderr: "" Sep 7 08:23:26.761: INFO: stdout: "Name: agnhost-primary-kstbt\nNamespace: kubectl-555\nPriority: 0\nNode: latest-worker/172.18.0.15\nStart Time: Mon, 07 Sep 2020 08:23:23 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.2.107\nIPs:\n IP: 10.244.2.107\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://68e1d77c51887200bf2390fe44d0c1079b2d2ad4d744593f8aebeeb0dcd0197b\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 07 Sep 2020 08:23:25 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-gch4g (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-gch4g:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-gch4g\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-555/agnhost-primary-kstbt to latest-worker\n Normal Pulled 2s kubelet, latest-worker Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.20\" already present on machine\n Normal Created 1s kubelet, latest-worker Created container agnhost-primary\n Normal Started 1s kubelet, latest-worker Started container agnhost-primary\n" Sep 7 08:23:26.761: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config describe rc agnhost-primary --namespace=kubectl-555' Sep 7 08:23:26.890: INFO: stderr: "" Sep 7 08:23:26.890: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-555\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-primary-kstbt\n" Sep 7 08:23:26.891: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config describe service agnhost-primary --namespace=kubectl-555' Sep 7 08:23:26.993: INFO: stderr: "" Sep 7 08:23:26.993: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-555\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP: 10.108.92.28\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.107:6379\nSession Affinity: None\nEvents: \n" Sep 7 08:23:26.997: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config describe node latest-control-plane' Sep 7 08:23:27.130: INFO: stderr: "" Sep 7 08:23:27.130: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n 
volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 06 Sep 2020 13:48:38 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Mon, 07 Sep 2020 08:23:18 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 07 Sep 2020 08:19:59 +0000 Sun, 06 Sep 2020 13:48:37 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 07 Sep 2020 08:19:59 +0000 Sun, 06 Sep 2020 13:48:37 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 07 Sep 2020 08:19:59 +0000 Sun, 06 Sep 2020 13:48:37 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 07 Sep 2020 08:19:59 +0000 Sun, 06 Sep 2020 13:49:21 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.16\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: d59faef4aa2a4d9b8cdafcceac6a297d\n System UUID: c61785e7-01cb-462c-b731-d83b1e2bdd6f\n Boot ID: 16f80d7c-7741-4040-9735-0d166ad57c21\n Kernel Version: 4.15.0-115-generic\n OS Image: Ubuntu Groovy Gorilla (development branch)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0\n Kubelet Version: v1.19.0\n Kube-Proxy Version: v1.19.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system 
coredns-f9fd979d6-fchgt 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 18h\n kube-system coredns-f9fd979d6-l4jkb 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 18h\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 18h\n kube-system kindnet-5sw6m 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 18h\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 18h\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 18h\n kube-system kube-proxy-nffvr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 18h\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 18h\n local-path-storage local-path-provisioner-78776bfc44-cck4c 0 (0%) 0 (0%) 0 (0%) 0 (0%) 18h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Sep 7 08:23:27.131: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config describe namespace kubectl-555' Sep 7 08:23:27.228: INFO: stderr: "" Sep 7 08:23:27.228: INFO: stdout: "Name: kubectl-555\nLabels: e2e-framework=kubectl\n e2e-run=c23436dc-1de2-4766-abbf-79def5975477\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:23:27.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-555" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":303,"completed":136,"skipped":1990,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:23:27.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-b178b8dd-406e-4ee8-9462-76a37aa27d33 STEP: Creating a pod to test consume configMaps Sep 7 08:23:27.368: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9c7c9c54-8038-44cf-8f8f-2339d524d3c7" in namespace "projected-5838" to be "Succeeded or Failed" Sep 7 08:23:27.415: INFO: Pod "pod-projected-configmaps-9c7c9c54-8038-44cf-8f8f-2339d524d3c7": Phase="Pending", Reason="", readiness=false. Elapsed: 46.681723ms Sep 7 08:23:29.419: INFO: Pod "pod-projected-configmaps-9c7c9c54-8038-44cf-8f8f-2339d524d3c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051017997s Sep 7 08:23:31.424: INFO: Pod "pod-projected-configmaps-9c7c9c54-8038-44cf-8f8f-2339d524d3c7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.055300171s STEP: Saw pod success Sep 7 08:23:31.424: INFO: Pod "pod-projected-configmaps-9c7c9c54-8038-44cf-8f8f-2339d524d3c7" satisfied condition "Succeeded or Failed" Sep 7 08:23:31.427: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-9c7c9c54-8038-44cf-8f8f-2339d524d3c7 container projected-configmap-volume-test: STEP: delete the pod Sep 7 08:23:31.469: INFO: Waiting for pod pod-projected-configmaps-9c7c9c54-8038-44cf-8f8f-2339d524d3c7 to disappear Sep 7 08:23:31.483: INFO: Pod pod-projected-configmaps-9c7c9c54-8038-44cf-8f8f-2339d524d3c7 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:23:31.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5838" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":137,"skipped":2003,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:23:31.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] 
[NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-b8514e6e-4317-4096-a315-12c471dde2fa STEP: Creating a pod to test consume secrets Sep 7 08:23:31.586: INFO: Waiting up to 5m0s for pod "pod-secrets-6477a0af-809d-4205-9f8d-cccd8e249ea8" in namespace "secrets-5168" to be "Succeeded or Failed" Sep 7 08:23:31.600: INFO: Pod "pod-secrets-6477a0af-809d-4205-9f8d-cccd8e249ea8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.055874ms Sep 7 08:23:33.605: INFO: Pod "pod-secrets-6477a0af-809d-4205-9f8d-cccd8e249ea8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018630141s Sep 7 08:23:35.702: INFO: Pod "pod-secrets-6477a0af-809d-4205-9f8d-cccd8e249ea8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115321414s Sep 7 08:23:37.705: INFO: Pod "pod-secrets-6477a0af-809d-4205-9f8d-cccd8e249ea8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.118396956s STEP: Saw pod success Sep 7 08:23:37.705: INFO: Pod "pod-secrets-6477a0af-809d-4205-9f8d-cccd8e249ea8" satisfied condition "Succeeded or Failed" Sep 7 08:23:37.707: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-6477a0af-809d-4205-9f8d-cccd8e249ea8 container secret-volume-test: STEP: delete the pod Sep 7 08:23:37.736: INFO: Waiting for pod pod-secrets-6477a0af-809d-4205-9f8d-cccd8e249ea8 to disappear Sep 7 08:23:37.739: INFO: Pod pod-secrets-6477a0af-809d-4205-9f8d-cccd8e249ea8 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:23:37.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5168" for this suite. 
• [SLOW TEST:6.255 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":138,"skipped":2012,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:23:37.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-fcbp STEP: Creating a pod to test 
atomic-volume-subpath Sep 7 08:23:37.850: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-fcbp" in namespace "subpath-9931" to be "Succeeded or Failed" Sep 7 08:23:37.867: INFO: Pod "pod-subpath-test-secret-fcbp": Phase="Pending", Reason="", readiness=false. Elapsed: 16.573121ms Sep 7 08:23:40.013: INFO: Pod "pod-subpath-test-secret-fcbp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162917243s Sep 7 08:23:42.017: INFO: Pod "pod-subpath-test-secret-fcbp": Phase="Running", Reason="", readiness=true. Elapsed: 4.167247016s Sep 7 08:23:44.021: INFO: Pod "pod-subpath-test-secret-fcbp": Phase="Running", Reason="", readiness=true. Elapsed: 6.171115127s Sep 7 08:23:46.025: INFO: Pod "pod-subpath-test-secret-fcbp": Phase="Running", Reason="", readiness=true. Elapsed: 8.175311025s Sep 7 08:23:48.029: INFO: Pod "pod-subpath-test-secret-fcbp": Phase="Running", Reason="", readiness=true. Elapsed: 10.179245516s Sep 7 08:23:50.076: INFO: Pod "pod-subpath-test-secret-fcbp": Phase="Running", Reason="", readiness=true. Elapsed: 12.22585247s Sep 7 08:23:52.080: INFO: Pod "pod-subpath-test-secret-fcbp": Phase="Running", Reason="", readiness=true. Elapsed: 14.22969612s Sep 7 08:23:54.114: INFO: Pod "pod-subpath-test-secret-fcbp": Phase="Running", Reason="", readiness=true. Elapsed: 16.264031791s Sep 7 08:23:56.133: INFO: Pod "pod-subpath-test-secret-fcbp": Phase="Running", Reason="", readiness=true. Elapsed: 18.282575692s Sep 7 08:23:58.139: INFO: Pod "pod-subpath-test-secret-fcbp": Phase="Running", Reason="", readiness=true. Elapsed: 20.288510545s Sep 7 08:24:00.157: INFO: Pod "pod-subpath-test-secret-fcbp": Phase="Running", Reason="", readiness=true. Elapsed: 22.306616639s Sep 7 08:24:02.161: INFO: Pod "pod-subpath-test-secret-fcbp": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.310551404s STEP: Saw pod success Sep 7 08:24:02.161: INFO: Pod "pod-subpath-test-secret-fcbp" satisfied condition "Succeeded or Failed" Sep 7 08:24:02.164: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-fcbp container test-container-subpath-secret-fcbp: STEP: delete the pod Sep 7 08:24:02.198: INFO: Waiting for pod pod-subpath-test-secret-fcbp to disappear Sep 7 08:24:02.213: INFO: Pod pod-subpath-test-secret-fcbp no longer exists STEP: Deleting pod pod-subpath-test-secret-fcbp Sep 7 08:24:02.213: INFO: Deleting pod "pod-subpath-test-secret-fcbp" in namespace "subpath-9931" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:24:02.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9931" for this suite. • [SLOW TEST:24.498 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":303,"completed":139,"skipped":2062,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:24:02.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-44104e50-21de-4071-80d1-6eab86644b77 STEP: Creating a pod to test consume secrets Sep 7 08:24:02.384: INFO: Waiting up to 5m0s for pod "pod-secrets-68cffeaa-aca5-47f1-8555-9276b129a6b7" in namespace "secrets-1280" to be "Succeeded or Failed" Sep 7 08:24:02.406: INFO: Pod "pod-secrets-68cffeaa-aca5-47f1-8555-9276b129a6b7": Phase="Pending", Reason="", readiness=false. Elapsed: 21.260501ms Sep 7 08:24:04.409: INFO: Pod "pod-secrets-68cffeaa-aca5-47f1-8555-9276b129a6b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024670724s Sep 7 08:24:06.426: INFO: Pod "pod-secrets-68cffeaa-aca5-47f1-8555-9276b129a6b7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.041900362s STEP: Saw pod success Sep 7 08:24:06.426: INFO: Pod "pod-secrets-68cffeaa-aca5-47f1-8555-9276b129a6b7" satisfied condition "Succeeded or Failed" Sep 7 08:24:06.430: INFO: Trying to get logs from node latest-worker pod pod-secrets-68cffeaa-aca5-47f1-8555-9276b129a6b7 container secret-volume-test: STEP: delete the pod Sep 7 08:24:06.475: INFO: Waiting for pod pod-secrets-68cffeaa-aca5-47f1-8555-9276b129a6b7 to disappear Sep 7 08:24:06.495: INFO: Pod pod-secrets-68cffeaa-aca5-47f1-8555-9276b129a6b7 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:24:06.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1280" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":140,"skipped":2064,"failed":0} SSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:24:06.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap 
that has name configmap-test-emptyKey-f76eb19e-7e70-48c3-8244-ec367754dc52 [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:24:06.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4504" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":303,"completed":141,"skipped":2070,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:24:06.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Sep 7 08:24:06.782: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:24:13.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1241" for this suite. • [SLOW TEST:6.778 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":303,"completed":142,"skipped":2095,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:24:13.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http 
liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-88c8e750-1a49-4ad8-82f4-20fa14543e8f in namespace container-probe-4746 Sep 7 08:24:17.620: INFO: Started pod liveness-88c8e750-1a49-4ad8-82f4-20fa14543e8f in namespace container-probe-4746 STEP: checking the pod's current state and verifying that restartCount is present Sep 7 08:24:17.622: INFO: Initial restart count of pod liveness-88c8e750-1a49-4ad8-82f4-20fa14543e8f is 0 Sep 7 08:24:41.714: INFO: Restart count of pod container-probe-4746/liveness-88c8e750-1a49-4ad8-82f4-20fa14543e8f is now 1 (24.091789659s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:24:41.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4746" for this suite. 
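The probe test above creates a pod whose container answers `/healthz` successfully at first and then starts failing, so the kubelet restarts it (restart count goes 0 → 1 after roughly 24 seconds in the log). A minimal sketch of such a pod, under the assumption that the test uses the agnhost `liveness` helper (the image, tag, port, and probe timings below are illustrative, not from the log):

```yaml
# Illustrative manifest only; the e2e suite constructs the equivalent spec in Go.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-88c8e750-1a49-4ad8-82f4-20fa14543e8f
  namespace: container-probe-4746
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # assumed image/tag
    args: ["liveness"]     # serves /healthz OK briefly, then begins returning errors
    livenessProbe:
      httpGet:
        path: /healthz     # the endpoint named in the test title
        port: 8080         # assumed port
      initialDelaySeconds: 15
      failureThreshold: 1  # restart on the first failed probe
```

With the default `restartPolicy: Always`, a failed liveness probe kills the container and the kubelet restarts it, which is exactly the `restartCount` transition the test asserts on.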
• [SLOW TEST:28.318 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":143,"skipped":2104,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:24:41.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Sep 7 08:24:42.133: INFO: Waiting up to 5m0s for pod "pod-f0dff934-57e0-4842-bb4d-ceb728d25806" in namespace "emptydir-1857" to be "Succeeded or Failed" Sep 7 08:24:42.209: INFO: Pod "pod-f0dff934-57e0-4842-bb4d-ceb728d25806": Phase="Pending", Reason="", readiness=false. 
Elapsed: 76.251062ms Sep 7 08:24:44.213: INFO: Pod "pod-f0dff934-57e0-4842-bb4d-ceb728d25806": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07987783s Sep 7 08:24:46.218: INFO: Pod "pod-f0dff934-57e0-4842-bb4d-ceb728d25806": Phase="Running", Reason="", readiness=true. Elapsed: 4.085008835s Sep 7 08:24:48.223: INFO: Pod "pod-f0dff934-57e0-4842-bb4d-ceb728d25806": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.090315119s STEP: Saw pod success Sep 7 08:24:48.223: INFO: Pod "pod-f0dff934-57e0-4842-bb4d-ceb728d25806" satisfied condition "Succeeded or Failed" Sep 7 08:24:48.226: INFO: Trying to get logs from node latest-worker pod pod-f0dff934-57e0-4842-bb4d-ceb728d25806 container test-container: STEP: delete the pod Sep 7 08:24:48.267: INFO: Waiting for pod pod-f0dff934-57e0-4842-bb4d-ceb728d25806 to disappear Sep 7 08:24:48.286: INFO: Pod pod-f0dff934-57e0-4842-bb4d-ceb728d25806 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:24:48.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1857" for this suite. 
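The emptyDir test above exercises the `(root,0644,tmpfs)` combination: a file created as root with mode 0644 on a RAM-backed volume. The key API detail is `emptyDir.medium: Memory`, which backs the volume with tmpfs instead of node disk. A sketch of the kind of pod involved (the image and the `mounttest` flags are assumptions):

```yaml
# Sketch of the pod this test likely creates; exact args are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # assumed image/tag
    args: ["mounttest",
           "--new_file_0644=/test-volume/test-file",  # create file with mode 0644
           "--file_perm=/test-volume/test-file"]      # then report its permissions
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory     # "tmpfs": back the emptyDir with RAM instead of node disk
```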
• [SLOW TEST:6.486 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":144,"skipped":2121,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:24:48.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-9340, will wait for the garbage collector to delete the pods Sep 7 08:24:54.466: INFO: Deleting Job.batch foo took: 6.609593ms Sep 7 08:24:54.966: INFO: Terminating Job.batch foo pods took: 500.239737ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:25:36.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9340" for this suite. • [SLOW TEST:48.555 seconds] [sig-apps] Job /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":303,"completed":145,"skipped":2206,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:25:36.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment 
to be ready Sep 7 08:25:37.628: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 7 08:25:39.989: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063937, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063937, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063937, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063937, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 7 08:25:41.997: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063937, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063937, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063937, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735063937, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 7 08:25:45.053: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 08:25:45.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:25:46.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6854" for this suite. STEP: Destroying namespace "webhook-6854-markers" for this suite. 
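The admission test above registers a validating webhook that intercepts CREATE, UPDATE, and DELETE of a test custom resource and denies them. The log names the service (`e2e-test-webhook`) and namespace (`webhook-6854`); everything else in this sketch — the CR group, resource name, port, and webhook name — is a hypothetical stand-in, since the framework generates those details at runtime:

```yaml
# Minimal sketch; the real rule set and CA bundle come from the test framework.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-custom-resource-example
webhooks:
- name: deny-custom-resource.example.com    # hypothetical webhook name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  rules:
  - apiGroups: ["stable.example.com"]       # hypothetical CR group
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE", "DELETE"]
    resources: ["e2e-test-crds"]            # hypothetical CR plural
  clientConfig:
    service:
      namespace: webhook-6854               # from the log
      name: e2e-test-webhook                # from the log
      port: 8443                            # assumed
    caBundle: "<base64-encoded CA certificate>"
```

This matches the observed flow: denied create/update/delete while the offending data is present, then a successful delete once the test removes the key the webhook objects to.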
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.393 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":303,"completed":146,"skipped":2214,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:25:46.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:26:02.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9243" for this suite. • [SLOW TEST:16.723 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":303,"completed":147,"skipped":2221,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:26:02.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:26:15.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5469" for this suite. • [SLOW TEST:12.456 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":303,"completed":148,"skipped":2227,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:26:15.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-2ead1d99-9ba7-4993-abb7-11cb56298b11 STEP: Creating a pod to test consume configMaps Sep 7 08:26:16.189: INFO: Waiting up to 5m0s for pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a" in namespace "configmap-7384" to be "Succeeded or Failed" Sep 7 08:26:16.193: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.931386ms Sep 7 08:26:18.356: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.166733825s Sep 7 08:26:21.943: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.753892785s Sep 7 08:26:24.260: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071004854s Sep 7 08:26:26.263: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.0737594s Sep 7 08:26:28.566: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.377167772s Sep 7 08:26:31.614: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 15.425147857s Sep 7 08:26:33.618: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.429097019s Sep 7 08:26:36.317: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.127942437s Sep 7 08:26:38.321: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.131671321s Sep 7 08:26:40.386: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 24.197133359s Sep 7 08:26:42.856: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 26.667067452s Sep 7 08:26:45.609: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 29.419503536s Sep 7 08:26:47.614: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 31.424538839s Sep 7 08:26:49.637: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 33.447456946s Sep 7 08:26:52.432: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 36.242873652s Sep 7 08:26:55.032: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 38.842508832s Sep 7 08:26:57.243: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 41.054126564s Sep 7 08:26:59.387: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 43.197601583s Sep 7 08:27:04.226: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 48.036716354s Sep 7 08:27:07.082: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 50.892963334s Sep 7 08:27:09.086: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 52.896947656s Sep 7 08:27:11.090: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 54.900502714s Sep 7 08:27:13.093: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 56.904395353s Sep 7 08:27:16.123: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 59.933891901s Sep 7 08:27:18.128: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.938571889s Sep 7 08:27:20.131: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m3.942155307s Sep 7 08:27:22.586: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m6.396768709s Sep 7 08:27:25.138: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.94882497s Sep 7 08:27:27.141: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.952125703s Sep 7 08:27:29.146: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.957102592s Sep 7 08:27:32.051: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m15.862427992s Sep 7 08:27:34.538: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.349247278s Sep 7 08:27:36.542: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.352789656s Sep 7 08:27:38.545: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.355826645s Sep 7 08:27:40.554: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.365059161s Sep 7 08:27:42.813: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.623550958s Sep 7 08:27:44.927: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.737745955s Sep 7 08:27:46.931: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.742051551s Sep 7 08:27:49.538: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m33.348489616s Sep 7 08:27:52.128: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m35.939030037s Sep 7 08:27:54.131: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m37.942019687s Sep 7 08:27:56.135: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m39.946096581s Sep 7 08:27:58.164: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m41.975381477s Sep 7 08:28:00.167: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m43.977691122s Sep 7 08:28:02.465: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.276265105s Sep 7 08:28:04.469: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.280217521s Sep 7 08:28:06.472: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.28326558s Sep 7 08:28:08.580: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.39099397s Sep 7 08:28:10.585: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.395509181s Sep 7 08:28:12.589: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.400061808s Sep 7 08:28:14.914: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m58.72458717s Sep 7 08:28:17.346: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2m1.156706789s Sep 7 08:28:19.792: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2m3.603140782s Sep 7 08:28:21.796: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2m5.606736556s Sep 7 08:28:23.950: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2m7.76137422s Sep 7 08:28:26.190: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.000715605s Sep 7 08:28:28.194: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.004729476s Sep 7 08:28:30.639: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.449687226s Sep 7 08:28:32.642: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Running", Reason="", readiness=true. Elapsed: 2m16.452461339s Sep 7 08:28:34.644: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Running", Reason="", readiness=true. Elapsed: 2m18.454992098s Sep 7 08:28:36.647: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Running", Reason="", readiness=true. Elapsed: 2m20.457982191s Sep 7 08:28:38.650: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Running", Reason="", readiness=true. Elapsed: 2m22.461092974s Sep 7 08:28:40.653: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Running", Reason="", readiness=true. 
Elapsed: 2m24.464067408s Sep 7 08:28:42.656: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Running", Reason="", readiness=true. Elapsed: 2m26.467392918s Sep 7 08:28:44.674: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Running", Reason="", readiness=true. Elapsed: 2m28.484453124s Sep 7 08:28:46.699: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Running", Reason="", readiness=true. Elapsed: 2m30.510092245s Sep 7 08:28:49.035: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Running", Reason="", readiness=true. Elapsed: 2m32.845551111s Sep 7 08:28:51.142: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Running", Reason="", readiness=true. Elapsed: 2m34.953174533s Sep 7 08:28:54.514: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Running", Reason="", readiness=true. Elapsed: 2m38.324591451s Sep 7 08:28:57.131: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Running", Reason="", readiness=true. Elapsed: 2m40.942145718s Sep 7 08:28:59.134: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2m42.94497511s
STEP: Saw pod success
Sep 7 08:28:59.134: INFO: Pod "pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a" satisfied condition "Succeeded or Failed"
Sep 7 08:28:59.136: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a container configmap-volume-test:
STEP: delete the pod
Sep 7 08:28:59.704: INFO: Waiting for pod pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a to disappear
Sep 7 08:29:00.023: INFO: Pod pod-configmaps-51dcfb52-e1ec-4484-bde4-43e3ea3d3c6a no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:29:00.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7384" for this suite.
• [SLOW TEST:165.817 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":149,"skipped":2236,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:29:01.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-b379fe83-efb0-4029-a9b2-33633974bd7e STEP: Creating a pod to test consume secrets Sep 7 08:29:02.053: INFO: Waiting up to 5m0s for pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a" in namespace "secrets-784" to be "Succeeded or Failed" Sep 7 08:29:03.124: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.070759734s Sep 7 08:29:05.197: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.143195191s Sep 7 08:29:09.188: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.134845235s Sep 7 08:29:11.221: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.167317062s Sep 7 08:29:13.880: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.826699986s Sep 7 08:29:15.884: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.831027589s Sep 7 08:29:18.832: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.778638369s Sep 7 08:29:20.835: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.782048096s Sep 7 08:29:23.125: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 21.071916066s Sep 7 08:29:25.341: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 23.287786769s Sep 7 08:29:27.568: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 25.51458063s Sep 7 08:29:29.749: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 27.695401069s Sep 7 08:29:31.753: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 29.699753415s Sep 7 08:29:33.756: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 31.702788468s Sep 7 08:29:36.124: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 34.07113417s Sep 7 08:29:38.432: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 36.378531076s Sep 7 08:29:40.796: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 38.742617466s Sep 7 08:29:42.995: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 40.941817801s Sep 7 08:29:44.998: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 42.945141821s Sep 7 08:29:47.002: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 44.948766007s Sep 7 08:29:49.185: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 47.131712625s Sep 7 08:29:51.347: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 49.294033016s Sep 7 08:29:53.350: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 51.296501312s Sep 7 08:29:55.724: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 53.670536046s Sep 7 08:29:57.728: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 55.674443376s Sep 7 08:29:59.731: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 57.677411735s Sep 7 08:30:01.735: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 59.681186819s Sep 7 08:30:03.788: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.734453586s Sep 7 08:30:06.199: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.145677709s Sep 7 08:30:08.203: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.149300913s Sep 7 08:30:10.490: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.437107201s Sep 7 08:30:12.874: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.820418096s Sep 7 08:30:14.958: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m12.904818172s Sep 7 08:30:17.259: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m15.206070721s Sep 7 08:30:19.691: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m17.637569018s Sep 7 08:30:21.694: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m19.640828144s Sep 7 08:30:23.698: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m21.644286942s Sep 7 08:30:25.702: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Running", Reason="", readiness=true. Elapsed: 1m23.64818714s Sep 7 08:30:27.705: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Running", Reason="", readiness=true. Elapsed: 1m25.65127088s Sep 7 08:30:30.047: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Running", Reason="", readiness=true. Elapsed: 1m27.994052508s Sep 7 08:30:32.051: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Running", Reason="", readiness=true. Elapsed: 1m29.99773117s Sep 7 08:30:34.108: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Running", Reason="", readiness=true. Elapsed: 1m32.054956719s Sep 7 08:30:36.203: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Running", Reason="", readiness=true. Elapsed: 1m34.149959609s Sep 7 08:30:38.629: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Running", Reason="", readiness=true. Elapsed: 1m36.575664185s Sep 7 08:30:40.633: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Running", Reason="", readiness=true. Elapsed: 1m38.580111341s Sep 7 08:30:42.637: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Running", Reason="", readiness=true. 
Elapsed: 1m40.583766595s
Sep 7 08:30:44.982: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Running", Reason="", readiness=true. Elapsed: 1m42.928706038s
Sep 7 08:30:46.986: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Running", Reason="", readiness=true. Elapsed: 1m44.933090644s
Sep 7 08:30:49.036: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Running", Reason="", readiness=true. Elapsed: 1m46.982809963s
Sep 7 08:30:51.282: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Running", Reason="", readiness=true. Elapsed: 1m49.229041466s
Sep 7 08:30:53.285: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m51.23208396s
STEP: Saw pod success
Sep 7 08:30:53.285: INFO: Pod "pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a" satisfied condition "Succeeded or Failed"
Sep 7 08:30:53.288: INFO: Trying to get logs from node latest-worker pod pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a container secret-volume-test:
STEP: delete the pod
Sep 7 08:30:53.369: INFO: Waiting for pod pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a to disappear
Sep 7 08:30:53.424: INFO: Pod pod-secrets-c272b6a1-1e73-4362-928d-03ddeaa0662a no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:30:53.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-784" for this suite.
• [SLOW TEST:112.169 seconds]
[sig-storage] Secrets
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":150,"skipped":2249,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:30:53.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: waiting for pod running
STEP: creating a file in subpath
Sep 7 08:32:22.620: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-1650 PodName:var-expansion-3db13d23-1d8e-48e6-88e6-91e26ed68880 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 7 08:32:22.620:
INFO: >>> kubeConfig: /root/.kube/config I0907 08:32:22.649084 7 log.go:181] (0xc0004c9290) (0xc0039c2780) Create stream I0907 08:32:22.649102 7 log.go:181] (0xc0004c9290) (0xc0039c2780) Stream added, broadcasting: 1 I0907 08:32:22.651057 7 log.go:181] (0xc0004c9290) Reply frame received for 1 I0907 08:32:22.651088 7 log.go:181] (0xc0004c9290) (0xc0029cb360) Create stream I0907 08:32:22.651099 7 log.go:181] (0xc0004c9290) (0xc0029cb360) Stream added, broadcasting: 3 I0907 08:32:22.652532 7 log.go:181] (0xc0004c9290) Reply frame received for 3 I0907 08:32:22.652556 7 log.go:181] (0xc0004c9290) (0xc0035521e0) Create stream I0907 08:32:22.652564 7 log.go:181] (0xc0004c9290) (0xc0035521e0) Stream added, broadcasting: 5 I0907 08:32:22.653342 7 log.go:181] (0xc0004c9290) Reply frame received for 5 I0907 08:32:22.739860 7 log.go:181] (0xc0004c9290) Data frame received for 5 I0907 08:32:22.739894 7 log.go:181] (0xc0035521e0) (5) Data frame handling I0907 08:32:22.739914 7 log.go:181] (0xc0004c9290) Data frame received for 3 I0907 08:32:22.739926 7 log.go:181] (0xc0029cb360) (3) Data frame handling I0907 08:32:22.741213 7 log.go:181] (0xc0004c9290) Data frame received for 1 I0907 08:32:22.741259 7 log.go:181] (0xc0039c2780) (1) Data frame handling I0907 08:32:22.741306 7 log.go:181] (0xc0039c2780) (1) Data frame sent I0907 08:32:22.741336 7 log.go:181] (0xc0004c9290) (0xc0039c2780) Stream removed, broadcasting: 1 I0907 08:32:22.741356 7 log.go:181] (0xc0004c9290) Go away received I0907 08:32:22.741502 7 log.go:181] (0xc0004c9290) (0xc0039c2780) Stream removed, broadcasting: 1 I0907 08:32:22.741533 7 log.go:181] (0xc0004c9290) (0xc0029cb360) Stream removed, broadcasting: 3 I0907 08:32:22.741557 7 log.go:181] (0xc0004c9290) (0xc0035521e0) Stream removed, broadcasting: 5 STEP: test for file in mounted path Sep 7 08:32:22.744: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-1650 
PodName:var-expansion-3db13d23-1d8e-48e6-88e6-91e26ed68880 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 7 08:32:22.744: INFO: >>> kubeConfig: /root/.kube/config I0907 08:32:22.777145 7 log.go:181] (0xc000d88e70) (0xc003552820) Create stream I0907 08:32:22.777174 7 log.go:181] (0xc000d88e70) (0xc003552820) Stream added, broadcasting: 1 I0907 08:32:22.779017 7 log.go:181] (0xc000d88e70) Reply frame received for 1 I0907 08:32:22.779046 7 log.go:181] (0xc000d88e70) (0xc001fb8000) Create stream I0907 08:32:22.779058 7 log.go:181] (0xc000d88e70) (0xc001fb8000) Stream added, broadcasting: 3 I0907 08:32:22.779683 7 log.go:181] (0xc000d88e70) Reply frame received for 3 I0907 08:32:22.779704 7 log.go:181] (0xc000d88e70) (0xc0029cb400) Create stream I0907 08:32:22.779713 7 log.go:181] (0xc000d88e70) (0xc0029cb400) Stream added, broadcasting: 5 I0907 08:32:22.780379 7 log.go:181] (0xc000d88e70) Reply frame received for 5 I0907 08:32:22.841376 7 log.go:181] (0xc000d88e70) Data frame received for 3 I0907 08:32:22.841399 7 log.go:181] (0xc001fb8000) (3) Data frame handling I0907 08:32:22.841614 7 log.go:181] (0xc000d88e70) Data frame received for 5 I0907 08:32:22.841628 7 log.go:181] (0xc0029cb400) (5) Data frame handling I0907 08:32:22.842296 7 log.go:181] (0xc000d88e70) Data frame received for 1 I0907 08:32:22.842313 7 log.go:181] (0xc003552820) (1) Data frame handling I0907 08:32:22.842342 7 log.go:181] (0xc003552820) (1) Data frame sent I0907 08:32:22.842358 7 log.go:181] (0xc000d88e70) (0xc003552820) Stream removed, broadcasting: 1 I0907 08:32:22.842381 7 log.go:181] (0xc000d88e70) Go away received I0907 08:32:22.842484 7 log.go:181] (0xc000d88e70) (0xc003552820) Stream removed, broadcasting: 1 I0907 08:32:22.842497 7 log.go:181] (0xc000d88e70) (0xc001fb8000) Stream removed, broadcasting: 3 I0907 08:32:22.842505 7 log.go:181] (0xc000d88e70) (0xc0029cb400) Stream removed, broadcasting: 5 STEP: updating the annotation 
value Sep 7 08:32:23.352: INFO: Successfully updated pod "var-expansion-3db13d23-1d8e-48e6-88e6-91e26ed68880"
STEP: waiting for annotated pod running
STEP: deleting the pod gracefully
Sep 7 08:32:23.405: INFO: Deleting pod "var-expansion-3db13d23-1d8e-48e6-88e6-91e26ed68880" in namespace "var-expansion-1650"
Sep 7 08:32:23.409: INFO: Wait up to 5m0s for pod "var-expansion-3db13d23-1d8e-48e6-88e6-91e26ed68880" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:34:55.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1650" for this suite.
• [SLOW TEST:242.012 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":303,"completed":151,"skipped":2265,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:34:55.449: INFO: >>> kubeConfig:
/root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 7 08:34:55.645: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"2acb615c-7353-42f7-a7f1-788e97a296b5", Controller:(*bool)(0xc00377f002), BlockOwnerDeletion:(*bool)(0xc00377f003)}}
Sep 7 08:34:55.662: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"fc651ff4-f198-4093-b748-d57e8b77b7e2", Controller:(*bool)(0xc00636856a), BlockOwnerDeletion:(*bool)(0xc00636856b)}}
Sep 7 08:34:55.672: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"a026321d-8e53-413b-88b5-15f9463fe3e4", Controller:(*bool)(0xc00377f212), BlockOwnerDeletion:(*bool)(0xc00377f213)}}
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:35:01.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9156" for this suite.
• [SLOW TEST:6.672 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not be blocked by dependency circle [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":303,"completed":152,"skipped":2272,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:35:02.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Docker Containers
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:35:55.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4155" for this suite.
• [SLOW TEST:53.090 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":303,"completed":153,"skipped":2300,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:35:55.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0907 08:35:56.393696 7 metrics_grabber.go:105] Did not receive an external client interface.
Grabbing metrics from ClusterAutoscaler is disabled. Sep 7 08:36:58.406: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:36:58.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7687" for this suite. • [SLOW TEST:63.200 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":303,"completed":154,"skipped":2305,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:36:58.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy Sep 7 08:36:58.744: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix943132424/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:36:58.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2512" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":303,"completed":155,"skipped":2309,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:36:58.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition 
objects works [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 08:36:58.957: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:37:10.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4182" for this suite. • [SLOW TEST:13.123 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":303,"completed":156,"skipped":2311,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:37:11.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:37:17.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6617" for this suite. STEP: Destroying namespace "nspatchtest-9dff21e5-d119-404e-b311-5def04bba705-888" for this suite. 
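[Editor's note] The Namespace test above patches a namespace and then checks that the label landed. The patch body in such a flow is a JSON merge patch along these lines; the label key and value here are hypothetical, and it would typically be applied with something like `kubectl patch namespace <name> --type=merge -p '<patch>'`:

```json
{
  "metadata": {
    "labels": {
      "testLabel": "testValue"
    }
  }
}
```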
• [SLOW TEST:6.113 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should patch a Namespace [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":303,"completed":157,"skipped":2333,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:37:18.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod Sep 7 08:38:10.720: INFO: Pod pod-hostip-f8ff4025-f615-463e-9f6a-ffcf9109630c has hostIP: 172.18.0.15 [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:38:10.720: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "pods-7137" for this suite. • [SLOW TEST:52.653 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":303,"completed":158,"skipped":2343,"failed":0} SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:38:10.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 08:38:10.782: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. 
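[Editor's note] The "complex daemon" flow above creates a DaemonSet with a node selector, so daemon pods launch only once a node carries the matching label ("blue" in this run) and are unscheduled when the label changes; the test then switches the update strategy to RollingUpdate. A sketch of such a DaemonSet follows — the label key `color`, the pod labels, and the image are assumptions for illustration, not read from this log:

```yaml
# Hypothetical DaemonSet matching the flow above: pods schedule
# only on nodes labeled color=blue, so relabeling a node to blue
# launches a daemon pod and relabeling it away evicts the pod.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue
      containers:
      - name: app
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
```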
Sep 7 08:38:10.788: INFO: Number of nodes with available pods: 0 Sep 7 08:38:10.788: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Sep 7 08:38:10.864: INFO: Number of nodes with available pods: 0 Sep 7 08:38:10.864: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:11.868: INFO: Number of nodes with available pods: 0 Sep 7 08:38:11.868: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:12.952: INFO: Number of nodes with available pods: 0 Sep 7 08:38:12.952: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:13.868: INFO: Number of nodes with available pods: 0 Sep 7 08:38:13.868: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:14.867: INFO: Number of nodes with available pods: 0 Sep 7 08:38:14.867: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:18.025: INFO: Number of nodes with available pods: 0 Sep 7 08:38:18.026: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:19.027: INFO: Number of nodes with available pods: 0 Sep 7 08:38:19.027: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:19.868: INFO: Number of nodes with available pods: 0 Sep 7 08:38:19.868: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:21.145: INFO: Number of nodes with available pods: 1 Sep 7 08:38:21.145: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Sep 7 08:38:21.616: INFO: Number of nodes with available pods: 1 Sep 7 08:38:21.616: INFO: Number of running nodes: 0, number of available pods: 1 Sep 7 08:38:22.620: INFO: Number of nodes with available pods: 0 Sep 7 08:38:22.620: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Sep 
7 08:38:23.395: INFO: Number of nodes with available pods: 0 Sep 7 08:38:23.395: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:24.398: INFO: Number of nodes with available pods: 0 Sep 7 08:38:24.398: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:25.399: INFO: Number of nodes with available pods: 0 Sep 7 08:38:25.399: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:26.582: INFO: Number of nodes with available pods: 0 Sep 7 08:38:26.582: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:27.398: INFO: Number of nodes with available pods: 0 Sep 7 08:38:27.398: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:28.398: INFO: Number of nodes with available pods: 0 Sep 7 08:38:28.398: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:29.399: INFO: Number of nodes with available pods: 0 Sep 7 08:38:29.399: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:30.398: INFO: Number of nodes with available pods: 0 Sep 7 08:38:30.398: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:31.398: INFO: Number of nodes with available pods: 0 Sep 7 08:38:31.398: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:32.448: INFO: Number of nodes with available pods: 0 Sep 7 08:38:32.448: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:33.398: INFO: Number of nodes with available pods: 0 Sep 7 08:38:33.398: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:34.399: INFO: Number of nodes with available pods: 0 Sep 7 08:38:34.399: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:35.413: INFO: Number of nodes with available pods: 0 Sep 7 08:38:35.413: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:36.398: INFO: Number of nodes with available pods: 0 Sep 7 08:38:36.399: INFO: Node 
latest-worker is running more than one daemon pod Sep 7 08:38:37.721: INFO: Number of nodes with available pods: 0 Sep 7 08:38:37.721: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:38.441: INFO: Number of nodes with available pods: 0 Sep 7 08:38:38.441: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:39.398: INFO: Number of nodes with available pods: 0 Sep 7 08:38:39.398: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:40.974: INFO: Number of nodes with available pods: 0 Sep 7 08:38:40.974: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:41.399: INFO: Number of nodes with available pods: 0 Sep 7 08:38:41.399: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:42.438: INFO: Number of nodes with available pods: 0 Sep 7 08:38:42.438: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:43.552: INFO: Number of nodes with available pods: 0 Sep 7 08:38:43.552: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:44.647: INFO: Number of nodes with available pods: 0 Sep 7 08:38:44.647: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:45.398: INFO: Number of nodes with available pods: 0 Sep 7 08:38:45.398: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:46.399: INFO: Number of nodes with available pods: 0 Sep 7 08:38:46.399: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:47.399: INFO: Number of nodes with available pods: 0 Sep 7 08:38:47.399: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:49.277: INFO: Number of nodes with available pods: 0 Sep 7 08:38:49.277: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:49.542: INFO: Number of nodes with available pods: 0 Sep 7 08:38:49.542: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:50.399: INFO: Number of nodes with 
available pods: 0 Sep 7 08:38:50.399: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:51.398: INFO: Number of nodes with available pods: 0 Sep 7 08:38:51.398: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:53.140: INFO: Number of nodes with available pods: 0 Sep 7 08:38:53.140: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:53.552: INFO: Number of nodes with available pods: 0 Sep 7 08:38:53.552: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:54.870: INFO: Number of nodes with available pods: 0 Sep 7 08:38:54.870: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:55.493: INFO: Number of nodes with available pods: 0 Sep 7 08:38:55.493: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:57.049: INFO: Number of nodes with available pods: 0 Sep 7 08:38:57.049: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:57.398: INFO: Number of nodes with available pods: 0 Sep 7 08:38:57.398: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:58.720: INFO: Number of nodes with available pods: 0 Sep 7 08:38:58.720: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:38:59.404: INFO: Number of nodes with available pods: 0 Sep 7 08:38:59.404: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:39:01.006: INFO: Number of nodes with available pods: 0 Sep 7 08:39:01.007: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:39:01.399: INFO: Number of nodes with available pods: 0 Sep 7 08:39:01.399: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:39:02.480: INFO: Number of nodes with available pods: 0 Sep 7 08:39:02.480: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:39:03.445: INFO: Number of nodes with available pods: 0 Sep 7 08:39:03.445: INFO: Node latest-worker is running more than one daemon pod 
Sep 7 08:39:04.398: INFO: Number of nodes with available pods: 0 Sep 7 08:39:04.398: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:39:05.398: INFO: Number of nodes with available pods: 0 Sep 7 08:39:05.398: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:39:06.398: INFO: Number of nodes with available pods: 0 Sep 7 08:39:06.398: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:39:07.445: INFO: Number of nodes with available pods: 0 Sep 7 08:39:07.445: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:39:08.399: INFO: Number of nodes with available pods: 0 Sep 7 08:39:08.399: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:39:09.398: INFO: Number of nodes with available pods: 0 Sep 7 08:39:09.398: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:39:10.690: INFO: Number of nodes with available pods: 0 Sep 7 08:39:10.690: INFO: Node latest-worker is running more than one daemon pod Sep 7 08:39:11.398: INFO: Number of nodes with available pods: 1 Sep 7 08:39:11.398: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7609, will wait for the garbage collector to delete the pods Sep 7 08:39:11.460: INFO: Deleting DaemonSet.extensions daemon-set took: 4.818093ms Sep 7 08:39:11.860: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.151735ms Sep 7 08:39:32.283: INFO: Number of nodes with available pods: 0 Sep 7 08:39:32.283: INFO: Number of running nodes: 0, number of available pods: 0 Sep 7 08:39:32.285: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7609/daemonsets","resourceVersion":"287373"},"items":null} Sep 7 08:39:32.287: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7609/pods","resourceVersion":"287373"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:39:32.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7609" for this suite. • [SLOW TEST:81.600 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":303,"completed":159,"skipped":2350,"failed":0} SSS ------------------------------ [sig-api-machinery] Events should delete a collection of events [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:39:32.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events Sep 7 08:39:32.397: INFO: created test-event-1 Sep 7 08:39:32.413: INFO: created test-event-2 Sep 7 08:39:32.457: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Sep 7 08:39:32.467: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Sep 7 08:39:32.480: INFO: requesting list of events to confirm quantity [AfterEach] [sig-api-machinery] Events /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:39:32.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8756" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":303,"completed":160,"skipped":2353,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:39:32.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-6397 STEP: creating service affinity-nodeport in namespace services-6397 STEP: creating replication controller affinity-nodeport in namespace services-6397 I0907 08:39:32.606309 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-6397, replica count: 3 I0907 08:39:35.656619 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:39:38.656804 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:39:41.656985 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:39:44.657298 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:39:47.657510 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:39:50.657749 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:39:53.657882 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:39:56.658085 7 
runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:39:59.658360 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:40:02.658606 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:40:05.658774 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:40:08.658924 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:40:11.659149 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:40:14.659346 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:40:17.659519 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:40:20.660142 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:40:23.660369 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:40:26.660606 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:40:29.660858 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 1 
running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:40:32.661062 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:40:35.661266 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 7 08:40:35.670: INFO: Creating new exec pod Sep 7 08:40:54.733: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-6397 execpod-affinitypj56x -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Sep 7 08:40:58.232: INFO: stderr: "I0907 08:40:58.154459 2012 log.go:181] (0xc00003abb0) (0xc0006cc3c0) Create stream\nI0907 08:40:58.154520 2012 log.go:181] (0xc00003abb0) (0xc0006cc3c0) Stream added, broadcasting: 1\nI0907 08:40:58.156275 2012 log.go:181] (0xc00003abb0) Reply frame received for 1\nI0907 08:40:58.156307 2012 log.go:181] (0xc00003abb0) (0xc0005c6000) Create stream\nI0907 08:40:58.156316 2012 log.go:181] (0xc00003abb0) (0xc0005c6000) Stream added, broadcasting: 3\nI0907 08:40:58.157077 2012 log.go:181] (0xc00003abb0) Reply frame received for 3\nI0907 08:40:58.157130 2012 log.go:181] (0xc00003abb0) (0xc000e84000) Create stream\nI0907 08:40:58.157162 2012 log.go:181] (0xc00003abb0) (0xc000e84000) Stream added, broadcasting: 5\nI0907 08:40:58.157907 2012 log.go:181] (0xc00003abb0) Reply frame received for 5\nI0907 08:40:58.227602 2012 log.go:181] (0xc00003abb0) Data frame received for 5\nI0907 08:40:58.227635 2012 log.go:181] (0xc000e84000) (5) Data frame handling\nI0907 08:40:58.227653 2012 log.go:181] (0xc000e84000) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0907 08:40:58.227873 2012 log.go:181] (0xc00003abb0) Data frame received for 5\nI0907 08:40:58.227907 2012 log.go:181] (0xc000e84000) (5) Data frame handling\nI0907 
08:40:58.227937 2012 log.go:181] (0xc000e84000) (5) Data frame sent\nI0907 08:40:58.227952 2012 log.go:181] (0xc00003abb0) Data frame received for 5\nI0907 08:40:58.227962 2012 log.go:181] (0xc000e84000) (5) Data frame handling\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0907 08:40:58.228264 2012 log.go:181] (0xc00003abb0) Data frame received for 3\nI0907 08:40:58.228275 2012 log.go:181] (0xc0005c6000) (3) Data frame handling\nI0907 08:40:58.229583 2012 log.go:181] (0xc00003abb0) Data frame received for 1\nI0907 08:40:58.229596 2012 log.go:181] (0xc0006cc3c0) (1) Data frame handling\nI0907 08:40:58.229608 2012 log.go:181] (0xc0006cc3c0) (1) Data frame sent\nI0907 08:40:58.229618 2012 log.go:181] (0xc00003abb0) (0xc0006cc3c0) Stream removed, broadcasting: 1\nI0907 08:40:58.229630 2012 log.go:181] (0xc00003abb0) Go away received\nI0907 08:40:58.229965 2012 log.go:181] (0xc00003abb0) (0xc0006cc3c0) Stream removed, broadcasting: 1\nI0907 08:40:58.229980 2012 log.go:181] (0xc00003abb0) (0xc0005c6000) Stream removed, broadcasting: 3\nI0907 08:40:58.229988 2012 log.go:181] (0xc00003abb0) (0xc000e84000) Stream removed, broadcasting: 5\n" Sep 7 08:40:58.233: INFO: stdout: "" Sep 7 08:40:58.233: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-6397 execpod-affinitypj56x -- /bin/sh -x -c nc -zv -t -w 2 10.109.123.154 80' Sep 7 08:40:58.416: INFO: stderr: "I0907 08:40:58.350454 2030 log.go:181] (0xc0009cadc0) (0xc000036460) Create stream\nI0907 08:40:58.350492 2030 log.go:181] (0xc0009cadc0) (0xc000036460) Stream added, broadcasting: 1\nI0907 08:40:58.354943 2030 log.go:181] (0xc0009cadc0) Reply frame received for 1\nI0907 08:40:58.354972 2030 log.go:181] (0xc0009cadc0) (0xc000a3b4a0) Create stream\nI0907 08:40:58.354980 2030 log.go:181] (0xc0009cadc0) (0xc000a3b4a0) Stream added, broadcasting: 3\nI0907 08:40:58.355544 2030 log.go:181] (0xc0009cadc0) Reply frame received for 
3\nI0907 08:40:58.355566 2030 log.go:181] (0xc0009cadc0) (0xc0000366e0) Create stream\nI0907 08:40:58.355574 2030 log.go:181] (0xc0009cadc0) (0xc0000366e0) Stream added, broadcasting: 5\nI0907 08:40:58.356277 2030 log.go:181] (0xc0009cadc0) Reply frame received for 5\nI0907 08:40:58.410292 2030 log.go:181] (0xc0009cadc0) Data frame received for 3\nI0907 08:40:58.410329 2030 log.go:181] (0xc000a3b4a0) (3) Data frame handling\nI0907 08:40:58.410354 2030 log.go:181] (0xc0009cadc0) Data frame received for 5\nI0907 08:40:58.410365 2030 log.go:181] (0xc0000366e0) (5) Data frame handling\nI0907 08:40:58.410377 2030 log.go:181] (0xc0000366e0) (5) Data frame sent\nI0907 08:40:58.410388 2030 log.go:181] (0xc0009cadc0) Data frame received for 5\n+ nc -zv -t -w 2 10.109.123.154 80\nConnection to 10.109.123.154 80 port [tcp/http] succeeded!\nI0907 08:40:58.410434 2030 log.go:181] (0xc0000366e0) (5) Data frame handling\nI0907 08:40:58.411617 2030 log.go:181] (0xc0009cadc0) Data frame received for 1\nI0907 08:40:58.411633 2030 log.go:181] (0xc000036460) (1) Data frame handling\nI0907 08:40:58.411641 2030 log.go:181] (0xc000036460) (1) Data frame sent\nI0907 08:40:58.411812 2030 log.go:181] (0xc0009cadc0) (0xc000036460) Stream removed, broadcasting: 1\nI0907 08:40:58.411844 2030 log.go:181] (0xc0009cadc0) Go away received\nI0907 08:40:58.412275 2030 log.go:181] (0xc0009cadc0) (0xc000036460) Stream removed, broadcasting: 1\nI0907 08:40:58.412296 2030 log.go:181] (0xc0009cadc0) (0xc000a3b4a0) Stream removed, broadcasting: 3\nI0907 08:40:58.412308 2030 log.go:181] (0xc0009cadc0) (0xc0000366e0) Stream removed, broadcasting: 5\n" Sep 7 08:40:58.416: INFO: stdout: "" Sep 7 08:40:58.416: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-6397 execpod-affinitypj56x -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 30429' Sep 7 08:40:58.596: INFO: stderr: "I0907 08:40:58.533983 2048 log.go:181] (0xc0008d91e0) 
(0xc0008d0780) Create stream\nI0907 08:40:58.534017 2048 log.go:181] (0xc0008d91e0) (0xc0008d0780) Stream added, broadcasting: 1\nI0907 08:40:58.535824 2048 log.go:181] (0xc0008d91e0) Reply frame received for 1\nI0907 08:40:58.535850 2048 log.go:181] (0xc0008d91e0) (0xc0005a2280) Create stream\nI0907 08:40:58.535860 2048 log.go:181] (0xc0008d91e0) (0xc0005a2280) Stream added, broadcasting: 3\nI0907 08:40:58.536607 2048 log.go:181] (0xc0008d91e0) Reply frame received for 3\nI0907 08:40:58.536658 2048 log.go:181] (0xc0008d91e0) (0xc0009bc5a0) Create stream\nI0907 08:40:58.536687 2048 log.go:181] (0xc0008d91e0) (0xc0009bc5a0) Stream added, broadcasting: 5\nI0907 08:40:58.537380 2048 log.go:181] (0xc0008d91e0) Reply frame received for 5\nI0907 08:40:58.590886 2048 log.go:181] (0xc0008d91e0) Data frame received for 5\nI0907 08:40:58.590907 2048 log.go:181] (0xc0009bc5a0) (5) Data frame handling\nI0907 08:40:58.590917 2048 log.go:181] (0xc0009bc5a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.15 30429\nI0907 08:40:58.591619 2048 log.go:181] (0xc0008d91e0) Data frame received for 5\nI0907 08:40:58.591633 2048 log.go:181] (0xc0009bc5a0) (5) Data frame handling\nI0907 08:40:58.591641 2048 log.go:181] (0xc0009bc5a0) (5) Data frame sent\nConnection to 172.18.0.15 30429 port [tcp/30429] succeeded!\nI0907 08:40:58.591984 2048 log.go:181] (0xc0008d91e0) Data frame received for 3\nI0907 08:40:58.592095 2048 log.go:181] (0xc0005a2280) (3) Data frame handling\nI0907 08:40:58.592154 2048 log.go:181] (0xc0008d91e0) Data frame received for 5\nI0907 08:40:58.592171 2048 log.go:181] (0xc0009bc5a0) (5) Data frame handling\nI0907 08:40:58.593161 2048 log.go:181] (0xc0008d91e0) Data frame received for 1\nI0907 08:40:58.593175 2048 log.go:181] (0xc0008d0780) (1) Data frame handling\nI0907 08:40:58.593189 2048 log.go:181] (0xc0008d0780) (1) Data frame sent\nI0907 08:40:58.593201 2048 log.go:181] (0xc0008d91e0) (0xc0008d0780) Stream removed, broadcasting: 1\nI0907 08:40:58.593298 2048 
log.go:181] (0xc0008d91e0) Go away received\nI0907 08:40:58.593463 2048 log.go:181] (0xc0008d91e0) (0xc0008d0780) Stream removed, broadcasting: 1\nI0907 08:40:58.593477 2048 log.go:181] (0xc0008d91e0) (0xc0005a2280) Stream removed, broadcasting: 3\nI0907 08:40:58.593486 2048 log.go:181] (0xc0008d91e0) (0xc0009bc5a0) Stream removed, broadcasting: 5\n" Sep 7 08:40:58.596: INFO: stdout: "" Sep 7 08:40:58.596: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-6397 execpod-affinitypj56x -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 30429' Sep 7 08:40:58.842: INFO: stderr: "I0907 08:40:58.777776 2065 log.go:181] (0xc00024c0b0) (0xc0008b34a0) Create stream\nI0907 08:40:58.777818 2065 log.go:181] (0xc00024c0b0) (0xc0008b34a0) Stream added, broadcasting: 1\nI0907 08:40:58.780487 2065 log.go:181] (0xc00024c0b0) Reply frame received for 1\nI0907 08:40:58.780512 2065 log.go:181] (0xc00024c0b0) (0xc000788460) Create stream\nI0907 08:40:58.780519 2065 log.go:181] (0xc00024c0b0) (0xc000788460) Stream added, broadcasting: 3\nI0907 08:40:58.781606 2065 log.go:181] (0xc00024c0b0) Reply frame received for 3\nI0907 08:40:58.781663 2065 log.go:181] (0xc00024c0b0) (0xc0004ec000) Create stream\nI0907 08:40:58.781683 2065 log.go:181] (0xc00024c0b0) (0xc0004ec000) Stream added, broadcasting: 5\nI0907 08:40:58.782562 2065 log.go:181] (0xc00024c0b0) Reply frame received for 5\nI0907 08:40:58.837628 2065 log.go:181] (0xc00024c0b0) Data frame received for 5\nI0907 08:40:58.837649 2065 log.go:181] (0xc0004ec000) (5) Data frame handling\nI0907 08:40:58.837670 2065 log.go:181] (0xc0004ec000) (5) Data frame sent\nI0907 08:40:58.837682 2065 log.go:181] (0xc00024c0b0) Data frame received for 5\nI0907 08:40:58.837688 2065 log.go:181] (0xc0004ec000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 30429\nConnection to 172.18.0.14 30429 port [tcp/30429] succeeded!\nI0907 08:40:58.837704 2065 log.go:181] (0xc0004ec000) 
(5) Data frame sent\nI0907 08:40:58.837951 2065 log.go:181] (0xc00024c0b0) Data frame received for 5\nI0907 08:40:58.837963 2065 log.go:181] (0xc0004ec000) (5) Data frame handling\nI0907 08:40:58.837978 2065 log.go:181] (0xc00024c0b0) Data frame received for 3\nI0907 08:40:58.837984 2065 log.go:181] (0xc000788460) (3) Data frame handling\nI0907 08:40:58.838847 2065 log.go:181] (0xc00024c0b0) Data frame received for 1\nI0907 08:40:58.838856 2065 log.go:181] (0xc0008b34a0) (1) Data frame handling\nI0907 08:40:58.838865 2065 log.go:181] (0xc0008b34a0) (1) Data frame sent\nI0907 08:40:58.838874 2065 log.go:181] (0xc00024c0b0) (0xc0008b34a0) Stream removed, broadcasting: 1\nI0907 08:40:58.838882 2065 log.go:181] (0xc00024c0b0) Go away received\nI0907 08:40:58.839191 2065 log.go:181] (0xc00024c0b0) (0xc0008b34a0) Stream removed, broadcasting: 1\nI0907 08:40:58.839203 2065 log.go:181] (0xc00024c0b0) (0xc000788460) Stream removed, broadcasting: 3\nI0907 08:40:58.839209 2065 log.go:181] (0xc00024c0b0) (0xc0004ec000) Stream removed, broadcasting: 5\n" Sep 7 08:40:58.842: INFO: stdout: "" Sep 7 08:40:58.842: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-6397 execpod-affinitypj56x -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.15:30429/ ; done' Sep 7 08:40:59.081: INFO: stderr: "I0907 08:40:58.954457 2083 log.go:181] (0xc000538000) (0xc000e26000) Create stream\nI0907 08:40:58.954494 2083 log.go:181] (0xc000538000) (0xc000e26000) Stream added, broadcasting: 1\nI0907 08:40:58.955512 2083 log.go:181] (0xc000538000) Reply frame received for 1\nI0907 08:40:58.955535 2083 log.go:181] (0xc000538000) (0xc0008fa500) Create stream\nI0907 08:40:58.955542 2083 log.go:181] (0xc000538000) (0xc0008fa500) Stream added, broadcasting: 3\nI0907 08:40:58.956136 2083 log.go:181] (0xc000538000) Reply frame received for 3\nI0907 08:40:58.956163 2083 log.go:181] 
(0xc000538000) (0xc000e260a0) Create stream\nI0907 08:40:58.956170 2083 log.go:181] (0xc000538000) (0xc000e260a0) Stream added, broadcasting: 5\nI0907 08:40:58.956684 2083 log.go:181] (0xc000538000) Reply frame received for 5\nI0907 08:40:59.015784 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.015805 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.015812 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.015824 2083 log.go:181] (0xc000538000) Data frame received for 5\nI0907 08:40:59.015834 2083 log.go:181] (0xc000e260a0) (5) Data frame handling\nI0907 08:40:59.015850 2083 log.go:181] (0xc000e260a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30429/\nI0907 08:40:59.017378 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.017390 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.017397 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.017710 2083 log.go:181] (0xc000538000) Data frame received for 5\nI0907 08:40:59.017724 2083 log.go:181] (0xc000e260a0) (5) Data frame handling\nI0907 08:40:59.017737 2083 log.go:181] (0xc000e260a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30429/\nI0907 08:40:59.017766 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.017774 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.017781 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.021180 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.021196 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.021207 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.021725 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.021743 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.021752 2083 log.go:181] (0xc0008fa500) 
(3) Data frame sent\nI0907 08:40:59.021764 2083 log.go:181] (0xc000538000) Data frame received for 5\nI0907 08:40:59.021773 2083 log.go:181] (0xc000e260a0) (5) Data frame handling\nI0907 08:40:59.021782 2083 log.go:181] (0xc000e260a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30429/\nI0907 08:40:59.025240 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.025267 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.025284 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.025627 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.025682 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.025725 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.025784 2083 log.go:181] (0xc000538000) Data frame received for 5\nI0907 08:40:59.025793 2083 log.go:181] (0xc000e260a0) (5) Data frame handling\nI0907 08:40:59.025801 2083 log.go:181] (0xc000e260a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30429/\nI0907 08:40:59.031486 2083 log.go:181] (0xc000538000) Data frame received for 5\nI0907 08:40:59.031502 2083 log.go:181] (0xc000e260a0) (5) Data frame handling\nI0907 08:40:59.031509 2083 log.go:181] (0xc000e260a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30429/\nI0907 08:40:59.031522 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.031527 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.031532 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.035391 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.035412 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.035425 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.035853 2083 log.go:181] (0xc000538000) Data frame received for 5\nI0907 08:40:59.035949 2083 log.go:181] 
(0xc000e260a0) (5) Data frame handling\nI0907 08:40:59.035974 2083 log.go:181] (0xc000e260a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30429/\nI0907 08:40:59.036329 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.036358 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.036373 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.038929 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.038946 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.038958 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.039382 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.039445 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.039487 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.039507 2083 log.go:181] (0xc000538000) Data frame received for 5\nI0907 08:40:59.039517 2083 log.go:181] (0xc000e260a0) (5) Data frame handling\nI0907 08:40:59.039530 2083 log.go:181] (0xc000e260a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30429/\nI0907 08:40:59.042547 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.042562 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.042582 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.042592 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.042596 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.042601 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.042610 2083 log.go:181] (0xc000538000) Data frame received for 5\nI0907 08:40:59.042614 2083 log.go:181] (0xc000e260a0) (5) Data frame handling\nI0907 08:40:59.042618 2083 log.go:181] (0xc000e260a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30429/\nI0907 08:40:59.046338 2083 
log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.046357 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.046372 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.046607 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.046620 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.046634 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.046641 2083 log.go:181] (0xc000538000) Data frame received for 5\nI0907 08:40:59.046649 2083 log.go:181] (0xc000e260a0) (5) Data frame handling\nI0907 08:40:59.046657 2083 log.go:181] (0xc000e260a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30429/\nI0907 08:40:59.050119 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.050137 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.050145 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.050496 2083 log.go:181] (0xc000538000) Data frame received for 5\nI0907 08:40:59.050505 2083 log.go:181] (0xc000e260a0) (5) Data frame handling\nI0907 08:40:59.050519 2083 log.go:181] (0xc000e260a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30429/\nI0907 08:40:59.050525 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.050529 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.050533 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.054113 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.054128 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.054148 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.054500 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.054515 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.054524 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 
08:40:59.054573 2083 log.go:181] (0xc000538000) Data frame received for 5\nI0907 08:40:59.054590 2083 log.go:181] (0xc000e260a0) (5) Data frame handling\nI0907 08:40:59.054602 2083 log.go:181] (0xc000e260a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30429/\nI0907 08:40:59.057901 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.057917 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.057930 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.058268 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.058317 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.058360 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.058422 2083 log.go:181] (0xc000538000) Data frame received for 5\nI0907 08:40:59.058438 2083 log.go:181] (0xc000e260a0) (5) Data frame handling\nI0907 08:40:59.058450 2083 log.go:181] (0xc000e260a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30429/\nI0907 08:40:59.061950 2083 log.go:181] (0xc000538000) Data frame received for 5\nI0907 08:40:59.061968 2083 log.go:181] (0xc000e260a0) (5) Data frame handling\nI0907 08:40:59.061974 2083 log.go:181] (0xc000e260a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30429/\nI0907 08:40:59.061981 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.061985 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.061989 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.065025 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.065038 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.065051 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.065507 2083 log.go:181] (0xc000538000) Data frame received for 5\nI0907 08:40:59.065522 2083 log.go:181] (0xc000e260a0) (5) Data frame 
handling\nI0907 08:40:59.065532 2083 log.go:181] (0xc000e260a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30429/\nI0907 08:40:59.065670 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.065683 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.065692 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.069098 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.069113 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.069126 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.069436 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.069455 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.069465 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.069477 2083 log.go:181] (0xc000538000) Data frame received for 5\nI0907 08:40:59.069485 2083 log.go:181] (0xc000e260a0) (5) Data frame handling\nI0907 08:40:59.069492 2083 log.go:181] (0xc000e260a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30429/\nI0907 08:40:59.072487 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.072504 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.072522 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.073004 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.073024 2083 log.go:181] (0xc000538000) Data frame received for 5\nI0907 08:40:59.073049 2083 log.go:181] (0xc000e260a0) (5) Data frame handling\nI0907 08:40:59.073072 2083 log.go:181] (0xc000e260a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30429/\nI0907 08:40:59.073083 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.073090 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.077233 2083 log.go:181] (0xc000538000) Data frame 
received for 3\nI0907 08:40:59.077249 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.077263 2083 log.go:181] (0xc0008fa500) (3) Data frame sent\nI0907 08:40:59.077769 2083 log.go:181] (0xc000538000) Data frame received for 3\nI0907 08:40:59.077785 2083 log.go:181] (0xc0008fa500) (3) Data frame handling\nI0907 08:40:59.077861 2083 log.go:181] (0xc000538000) Data frame received for 5\nI0907 08:40:59.077893 2083 log.go:181] (0xc000e260a0) (5) Data frame handling\nI0907 08:40:59.078918 2083 log.go:181] (0xc000538000) Data frame received for 1\nI0907 08:40:59.078939 2083 log.go:181] (0xc000e26000) (1) Data frame handling\nI0907 08:40:59.078948 2083 log.go:181] (0xc000e26000) (1) Data frame sent\nI0907 08:40:59.078955 2083 log.go:181] (0xc000538000) (0xc000e26000) Stream removed, broadcasting: 1\nI0907 08:40:59.078969 2083 log.go:181] (0xc000538000) Go away received\nI0907 08:40:59.079191 2083 log.go:181] (0xc000538000) (0xc000e26000) Stream removed, broadcasting: 1\nI0907 08:40:59.079201 2083 log.go:181] (0xc000538000) (0xc0008fa500) Stream removed, broadcasting: 3\nI0907 08:40:59.079207 2083 log.go:181] (0xc000538000) (0xc000e260a0) Stream removed, broadcasting: 5\n" Sep 7 08:40:59.082: INFO: stdout: "\naffinity-nodeport-k5qms\naffinity-nodeport-k5qms\naffinity-nodeport-k5qms\naffinity-nodeport-k5qms\naffinity-nodeport-k5qms\naffinity-nodeport-k5qms\naffinity-nodeport-k5qms\naffinity-nodeport-k5qms\naffinity-nodeport-k5qms\naffinity-nodeport-k5qms\naffinity-nodeport-k5qms\naffinity-nodeport-k5qms\naffinity-nodeport-k5qms\naffinity-nodeport-k5qms\naffinity-nodeport-k5qms\naffinity-nodeport-k5qms" Sep 7 08:40:59.082: INFO: Received response from host: affinity-nodeport-k5qms Sep 7 08:40:59.082: INFO: Received response from host: affinity-nodeport-k5qms Sep 7 08:40:59.082: INFO: Received response from host: affinity-nodeport-k5qms Sep 7 08:40:59.082: INFO: Received response from host: affinity-nodeport-k5qms Sep 7 08:40:59.082: INFO: Received 
response from host: affinity-nodeport-k5qms Sep 7 08:40:59.082: INFO: Received response from host: affinity-nodeport-k5qms Sep 7 08:40:59.082: INFO: Received response from host: affinity-nodeport-k5qms Sep 7 08:40:59.082: INFO: Received response from host: affinity-nodeport-k5qms Sep 7 08:40:59.082: INFO: Received response from host: affinity-nodeport-k5qms Sep 7 08:40:59.082: INFO: Received response from host: affinity-nodeport-k5qms Sep 7 08:40:59.082: INFO: Received response from host: affinity-nodeport-k5qms Sep 7 08:40:59.082: INFO: Received response from host: affinity-nodeport-k5qms Sep 7 08:40:59.082: INFO: Received response from host: affinity-nodeport-k5qms Sep 7 08:40:59.082: INFO: Received response from host: affinity-nodeport-k5qms Sep 7 08:40:59.082: INFO: Received response from host: affinity-nodeport-k5qms Sep 7 08:40:59.082: INFO: Received response from host: affinity-nodeport-k5qms Sep 7 08:40:59.082: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-6397, will wait for the garbage collector to delete the pods Sep 7 08:40:59.203: INFO: Deleting ReplicationController affinity-nodeport took: 5.703328ms Sep 7 08:41:00.303: INFO: Terminating ReplicationController affinity-nodeport pods took: 1.100202082s [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:41:52.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6397" for this suite. 
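The 16-iteration curl loop above is the affinity assertion itself: the test passes only if every response comes from the same backend pod (here `affinity-nodeport-k5qms`). The core check can be sketched outside the e2e framework as a small shell helper; the function name `check_affinity` and the inlined sample responses are illustrative only, not part of the Kubernetes test suite:

```shell
# Minimal sketch of the session-affinity check the e2e test performs.
# Helper name and sample data are hypothetical; the real assertion is in Go.
check_affinity() {
  # $1: newline-separated backend hostnames returned by the service.
  # Affinity holds iff exactly one distinct hostname appears.
  distinct=$(printf '%s\n' "$1" | sort -u | wc -l | tr -d ' ')
  [ "$distinct" -eq 1 ]
}

# Against a live cluster, responses would be gathered roughly as the log shows
# (node IP and NodePort are placeholders):
#   responses=$(for i in $(seq 0 15); do
#     curl -q -s --connect-timeout 2 http://<node-ip>:<node-port>/; echo
#   done)
responses="affinity-nodeport-k5qms
affinity-nodeport-k5qms
affinity-nodeport-k5qms"

if check_affinity "$responses"; then
  echo "session affinity held"
else
  echo "session affinity broken"
fi
```

Run against the captured responses above, the helper reports that affinity held; any second distinct hostname in the list would fail the check, mirroring how the test would fail if the NodePort service load-balanced across pods despite `sessionAffinity: ClientIP`.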
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:139.888 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":161,"skipped":2380,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:41:52.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 08:41:52.518: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Sep 7 
08:41:57.699: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Sep 7 08:42:35.710: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 7 08:42:35.795: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-8256 /apis/apps/v1/namespaces/deployment-8256/deployments/test-cleanup-deployment be54d12b-091e-4b71-a60d-c5d80ad14b18 287974 1 2020-09-07 08:42:35 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-09-07 08:42:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00377f2f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Sep 7 08:42:35.806: INFO: New ReplicaSet "test-cleanup-deployment-5d446bdd47" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5d446bdd47 deployment-8256 /apis/apps/v1/namespaces/deployment-8256/replicasets/test-cleanup-deployment-5d446bdd47 b26cd545-d1f0-4cad-aeb7-2641b8058910 287976 1 2020-09-07 08:42:35 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment be54d12b-091e-4b71-a60d-c5d80ad14b18 0xc000c03327 0xc000c03328}] [] [{kube-controller-manager Update apps/v1 2020-09-07 08:42:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be54d12b-091e-4b71-a60d-c5d80ad14b18\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5d446bdd47,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000c033e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 7 08:42:35.806: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Sep 7 08:42:35.806: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-8256 /apis/apps/v1/namespaces/deployment-8256/replicasets/test-cleanup-controller cafcfe83-0151-4dad-990a-e46df7fda5f8 287975 1 2020-09-07 08:41:52 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment be54d12b-091e-4b71-a60d-c5d80ad14b18 0xc000c030b7 0xc000c030b8}] [] [{e2e.test Update apps/v1 2020-09-07 08:41:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-07 08:42:35 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"be54d12b-091e-4b71-a60d-c5d80ad14b18\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] 
[] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000c03288 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Sep 7 08:42:35.838: INFO: Pod "test-cleanup-controller-gjt5j" is available: &Pod{ObjectMeta:{test-cleanup-controller-gjt5j test-cleanup-controller- deployment-8256 /api/v1/namespaces/deployment-8256/pods/test-cleanup-controller-gjt5j 42cbeb51-336d-497e-a40d-2f2062c8a0a9 287971 0 2020-09-07 08:41:52 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller cafcfe83-0151-4dad-990a-e46df7fda5f8 0xc004200b17 0xc004200b18}] [] [{kube-controller-manager Update v1 2020-09-07 08:41:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cafcfe83-0151-4dad-990a-e46df7fda5f8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-07 08:42:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.121\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6dpv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6dpv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6dpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,D
eprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:41:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:42:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:42:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-07 08:41:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.2.121,StartTime:2020-09-07 08:41:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-07 08:42:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cf4c0300deba4866532f9d6b4231fd4254affcec2d1fca85eeff33b88e899877,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.121,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:42:35.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8256" for this suite. 
• [SLOW TEST:43.560 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":303,"completed":162,"skipped":2405,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:42:35.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-1143 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1143 to expose endpoints map[] Sep 7 08:42:36.205: INFO: Failed to get Endpoints object: endpoints "multi-endpoint-test" not found Sep 7 08:42:37.212: INFO: 
successfully validated that service multi-endpoint-test in namespace services-1143 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-1143 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1143 to expose endpoints map[pod1:[100]] Sep 7 08:42:41.388: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]], will retry Sep 7 08:42:47.352: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]], will retry Sep 7 08:42:50.849: INFO: successfully validated that service multi-endpoint-test in namespace services-1143 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-1143 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1143 to expose endpoints map[pod1:[100] pod2:[101]] Sep 7 08:42:55.351: INFO: Unexpected endpoints: found map[2cb86278-b92d-4508-8554-a4ce6b92387d:[100]], expected map[pod1:[100] pod2:[101]], will retry Sep 7 08:43:00.406: INFO: Unexpected endpoints: found map[2cb86278-b92d-4508-8554-a4ce6b92387d:[100]], expected map[pod1:[100] pod2:[101]], will retry Sep 7 08:43:05.353: INFO: successfully validated that service multi-endpoint-test in namespace services-1143 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-1143 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1143 to expose endpoints map[pod2:[101]] Sep 7 08:43:05.455: INFO: successfully validated that service multi-endpoint-test in namespace services-1143 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-1143 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1143 to expose endpoints map[] Sep 7 08:43:05.598: INFO: successfully validated that service multi-endpoint-test in namespace services-1143 exposes endpoints map[] [AfterEach] [sig-network] Services 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:43:05.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1143" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:30.839 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":303,"completed":163,"skipped":2433,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:43:06.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-f9ca3bec-faad-4869-a01e-960140f65037 STEP: Creating a pod to test consume configMaps Sep 7 08:43:09.132: INFO: Waiting up to 5m0s for pod "pod-configmaps-e43a2eb4-a597-490a-b8f9-12c293d1fc43" in namespace "configmap-2746" to be "Succeeded or Failed" Sep 7 08:43:09.902: INFO: Pod "pod-configmaps-e43a2eb4-a597-490a-b8f9-12c293d1fc43": Phase="Pending", Reason="", readiness=false. Elapsed: 770.002034ms Sep 7 08:43:12.697: INFO: Pod "pod-configmaps-e43a2eb4-a597-490a-b8f9-12c293d1fc43": Phase="Pending", Reason="", readiness=false. Elapsed: 3.564727354s Sep 7 08:43:14.754: INFO: Pod "pod-configmaps-e43a2eb4-a597-490a-b8f9-12c293d1fc43": Phase="Pending", Reason="", readiness=false. Elapsed: 5.622029435s Sep 7 08:43:16.957: INFO: Pod "pod-configmaps-e43a2eb4-a597-490a-b8f9-12c293d1fc43": Phase="Pending", Reason="", readiness=false. Elapsed: 7.825573742s Sep 7 08:43:18.961: INFO: Pod "pod-configmaps-e43a2eb4-a597-490a-b8f9-12c293d1fc43": Phase="Pending", Reason="", readiness=false. Elapsed: 9.82938794s Sep 7 08:43:20.964: INFO: Pod "pod-configmaps-e43a2eb4-a597-490a-b8f9-12c293d1fc43": Phase="Pending", Reason="", readiness=false. Elapsed: 11.832351886s Sep 7 08:43:23.531: INFO: Pod "pod-configmaps-e43a2eb4-a597-490a-b8f9-12c293d1fc43": Phase="Pending", Reason="", readiness=false. Elapsed: 14.398780778s Sep 7 08:43:25.862: INFO: Pod "pod-configmaps-e43a2eb4-a597-490a-b8f9-12c293d1fc43": Phase="Pending", Reason="", readiness=false. Elapsed: 16.73027082s Sep 7 08:43:27.866: INFO: Pod "pod-configmaps-e43a2eb4-a597-490a-b8f9-12c293d1fc43": Phase="Pending", Reason="", readiness=false. Elapsed: 18.734483291s Sep 7 08:43:29.975: INFO: Pod "pod-configmaps-e43a2eb4-a597-490a-b8f9-12c293d1fc43": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.843638294s Sep 7 08:43:31.979: INFO: Pod "pod-configmaps-e43a2eb4-a597-490a-b8f9-12c293d1fc43": Phase="Pending", Reason="", readiness=false. Elapsed: 22.847148199s Sep 7 08:43:33.982: INFO: Pod "pod-configmaps-e43a2eb4-a597-490a-b8f9-12c293d1fc43": Phase="Pending", Reason="", readiness=false. Elapsed: 24.849707537s Sep 7 08:43:36.443: INFO: Pod "pod-configmaps-e43a2eb4-a597-490a-b8f9-12c293d1fc43": Phase="Pending", Reason="", readiness=false. Elapsed: 27.311475722s Sep 7 08:43:38.447: INFO: Pod "pod-configmaps-e43a2eb4-a597-490a-b8f9-12c293d1fc43": Phase="Pending", Reason="", readiness=false. Elapsed: 29.314755495s Sep 7 08:43:40.730: INFO: Pod "pod-configmaps-e43a2eb4-a597-490a-b8f9-12c293d1fc43": Phase="Pending", Reason="", readiness=false. Elapsed: 31.598085616s Sep 7 08:43:42.754: INFO: Pod "pod-configmaps-e43a2eb4-a597-490a-b8f9-12c293d1fc43": Phase="Pending", Reason="", readiness=false. Elapsed: 33.622302302s Sep 7 08:43:45.677: INFO: Pod "pod-configmaps-e43a2eb4-a597-490a-b8f9-12c293d1fc43": Phase="Pending", Reason="", readiness=false. Elapsed: 36.545278467s Sep 7 08:43:47.679: INFO: Pod "pod-configmaps-e43a2eb4-a597-490a-b8f9-12c293d1fc43": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 38.547371123s STEP: Saw pod success Sep 7 08:43:47.679: INFO: Pod "pod-configmaps-e43a2eb4-a597-490a-b8f9-12c293d1fc43" satisfied condition "Succeeded or Failed" Sep 7 08:43:47.681: INFO: Trying to get logs from node latest-worker pod pod-configmaps-e43a2eb4-a597-490a-b8f9-12c293d1fc43 container configmap-volume-test: STEP: delete the pod Sep 7 08:43:48.068: INFO: Waiting for pod pod-configmaps-e43a2eb4-a597-490a-b8f9-12c293d1fc43 to disappear Sep 7 08:43:48.425: INFO: Pod pod-configmaps-e43a2eb4-a597-490a-b8f9-12c293d1fc43 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:43:48.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2746" for this suite. • [SLOW TEST:41.755 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":164,"skipped":2445,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:43:48.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition Sep 7 08:43:48.664: INFO: Waiting up to 5m0s for pod "var-expansion-93707199-738f-45a3-8a7f-366c53a56858" in namespace "var-expansion-8104" to be "Succeeded or Failed" Sep 7 08:43:48.719: INFO: Pod "var-expansion-93707199-738f-45a3-8a7f-366c53a56858": Phase="Pending", Reason="", readiness=false. Elapsed: 54.606932ms Sep 7 08:43:52.019: INFO: Pod "var-expansion-93707199-738f-45a3-8a7f-366c53a56858": Phase="Pending", Reason="", readiness=false. Elapsed: 3.354131761s Sep 7 08:43:54.060: INFO: Pod "var-expansion-93707199-738f-45a3-8a7f-366c53a56858": Phase="Pending", Reason="", readiness=false. Elapsed: 5.395332647s Sep 7 08:43:56.416: INFO: Pod "var-expansion-93707199-738f-45a3-8a7f-366c53a56858": Phase="Pending", Reason="", readiness=false. Elapsed: 7.751380255s Sep 7 08:43:58.461: INFO: Pod "var-expansion-93707199-738f-45a3-8a7f-366c53a56858": Phase="Running", Reason="", readiness=true. Elapsed: 9.796756421s Sep 7 08:44:00.464: INFO: Pod "var-expansion-93707199-738f-45a3-8a7f-366c53a56858": Phase="Running", Reason="", readiness=true. Elapsed: 11.799843763s Sep 7 08:44:03.880: INFO: Pod "var-expansion-93707199-738f-45a3-8a7f-366c53a56858": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 15.216052371s STEP: Saw pod success Sep 7 08:44:03.881: INFO: Pod "var-expansion-93707199-738f-45a3-8a7f-366c53a56858" satisfied condition "Succeeded or Failed" Sep 7 08:44:04.306: INFO: Trying to get logs from node latest-worker2 pod var-expansion-93707199-738f-45a3-8a7f-366c53a56858 container dapi-container: STEP: delete the pod Sep 7 08:44:04.491: INFO: Waiting for pod var-expansion-93707199-738f-45a3-8a7f-366c53a56858 to disappear Sep 7 08:44:04.496: INFO: Pod var-expansion-93707199-738f-45a3-8a7f-366c53a56858 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:44:04.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8104" for this suite. • [SLOW TEST:15.969 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":303,"completed":165,"skipped":2458,"failed":0} SSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:44:04.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 08:44:04.644: INFO: Creating ReplicaSet my-hostname-basic-779cc8f4-5a4d-4b54-8e7e-604369183b7e Sep 7 08:44:04.802: INFO: Pod name my-hostname-basic-779cc8f4-5a4d-4b54-8e7e-604369183b7e: Found 0 pods out of 1 Sep 7 08:44:10.618: INFO: Pod name my-hostname-basic-779cc8f4-5a4d-4b54-8e7e-604369183b7e: Found 1 pods out of 1 Sep 7 08:44:10.618: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-779cc8f4-5a4d-4b54-8e7e-604369183b7e" is running Sep 7 08:44:32.625: INFO: Pod "my-hostname-basic-779cc8f4-5a4d-4b54-8e7e-604369183b7e-sx9sx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-07 08:44:04 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-07 08:44:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-779cc8f4-5a4d-4b54-8e7e-604369183b7e]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-07 08:44:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-779cc8f4-5a4d-4b54-8e7e-604369183b7e]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-07 08:44:04 +0000 UTC Reason: Message:}]) Sep 7 08:44:32.625: INFO: Trying to 
dial the pod Sep 7 08:44:37.635: INFO: Controller my-hostname-basic-779cc8f4-5a4d-4b54-8e7e-604369183b7e: Got expected result from replica 1 [my-hostname-basic-779cc8f4-5a4d-4b54-8e7e-604369183b7e-sx9sx]: "my-hostname-basic-779cc8f4-5a4d-4b54-8e7e-604369183b7e-sx9sx", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:44:37.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7261" for this suite. • [SLOW TEST:33.160 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":166,"skipped":2465,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:44:37.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a 
default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 7 08:44:38.607: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 7 08:44:40.619: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065078, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065078, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065078, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065078, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 7 08:44:42.624: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065078, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065078, 
loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065078, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065078, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 7 08:44:45.669: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:44:45.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9190" for this suite. STEP: Destroying namespace "webhook-9190-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.425 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":303,"completed":167,"skipped":2470,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:44:46.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name 
configmap-test-upd-c16b2638-6607-4bfd-8ecd-b2f4f94766bb STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:44:52.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6169" for this suite. • [SLOW TEST:6.268 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":168,"skipped":2478,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:44:52.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2340 [It] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-2340 STEP: Creating statefulset with conflicting port in namespace statefulset-2340 STEP: Waiting until pod test-pod starts running in namespace statefulset-2340 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-2340 Sep 7 08:44:56.523: INFO: Observed stateful pod in namespace: statefulset-2340, name: ss-0, uid: 689c1ef2-c885-4751-997a-cca4056e416e, status phase: Pending. Waiting for statefulset controller to delete. Sep 7 08:44:56.678: INFO: Observed stateful pod in namespace: statefulset-2340, name: ss-0, uid: 689c1ef2-c885-4751-997a-cca4056e416e, status phase: Failed. Waiting for statefulset controller to delete. Sep 7 08:44:56.720: INFO: Observed stateful pod in namespace: statefulset-2340, name: ss-0, uid: 689c1ef2-c885-4751-997a-cca4056e416e, status phase: Failed. Waiting for statefulset controller to delete. 
Sep 7 08:44:56.744: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2340 STEP: Removing pod with conflicting port in namespace statefulset-2340 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-2340 and reaches the Running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 7 08:45:00.883: INFO: Deleting all statefulsets in ns statefulset-2340 Sep 7 08:45:00.886: INFO: Scaling statefulset ss to 0 Sep 7 08:45:10.930: INFO: Waiting for statefulset status.replicas to be updated to 0 Sep 7 08:45:10.934: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:45:11.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2340" for this suite. 
• [SLOW TEST:18.746 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":303,"completed":169,"skipped":2514,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:45:11.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying 
/etc/hosts of container is kubelet-managed for pod with hostNetwork=false Sep 7 08:45:23.290: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7074 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 7 08:45:23.290: INFO: >>> kubeConfig: /root/.kube/config I0907 08:45:23.332894 7 log.go:181] (0xc0004c9550) (0xc0038fb680) Create stream I0907 08:45:23.332950 7 log.go:181] (0xc0004c9550) (0xc0038fb680) Stream added, broadcasting: 1 I0907 08:45:23.339332 7 log.go:181] (0xc0004c9550) Reply frame received for 1 I0907 08:45:23.339395 7 log.go:181] (0xc0004c9550) (0xc0039c2820) Create stream I0907 08:45:23.339414 7 log.go:181] (0xc0004c9550) (0xc0039c2820) Stream added, broadcasting: 3 I0907 08:45:23.340491 7 log.go:181] (0xc0004c9550) Reply frame received for 3 I0907 08:45:23.340511 7 log.go:181] (0xc0004c9550) (0xc003ac1540) Create stream I0907 08:45:23.340520 7 log.go:181] (0xc0004c9550) (0xc003ac1540) Stream added, broadcasting: 5 I0907 08:45:23.341677 7 log.go:181] (0xc0004c9550) Reply frame received for 5 I0907 08:45:23.400922 7 log.go:181] (0xc0004c9550) Data frame received for 3 I0907 08:45:23.400968 7 log.go:181] (0xc0039c2820) (3) Data frame handling I0907 08:45:23.400983 7 log.go:181] (0xc0039c2820) (3) Data frame sent I0907 08:45:23.401014 7 log.go:181] (0xc0004c9550) Data frame received for 3 I0907 08:45:23.401035 7 log.go:181] (0xc0039c2820) (3) Data frame handling I0907 08:45:23.401083 7 log.go:181] (0xc0004c9550) Data frame received for 5 I0907 08:45:23.401143 7 log.go:181] (0xc003ac1540) (5) Data frame handling I0907 08:45:23.402803 7 log.go:181] (0xc0004c9550) Data frame received for 1 I0907 08:45:23.402826 7 log.go:181] (0xc0038fb680) (1) Data frame handling I0907 08:45:23.402845 7 log.go:181] (0xc0038fb680) (1) Data frame sent I0907 08:45:23.402863 7 log.go:181] (0xc0004c9550) (0xc0038fb680) Stream removed, broadcasting: 1 I0907 08:45:23.402972 7 log.go:181] 
(0xc0004c9550) (0xc0038fb680) Stream removed, broadcasting: 1 I0907 08:45:23.403007 7 log.go:181] (0xc0004c9550) (0xc0039c2820) Stream removed, broadcasting: 3 I0907 08:45:23.403020 7 log.go:181] (0xc0004c9550) (0xc003ac1540) Stream removed, broadcasting: 5 Sep 7 08:45:23.403: INFO: Exec stderr: "" Sep 7 08:45:23.403: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7074 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 7 08:45:23.403: INFO: >>> kubeConfig: /root/.kube/config I0907 08:45:23.403117 7 log.go:181] (0xc0004c9550) Go away received I0907 08:45:23.435480 7 log.go:181] (0xc005ebc9a0) (0xc003552460) Create stream I0907 08:45:23.435499 7 log.go:181] (0xc005ebc9a0) (0xc003552460) Stream added, broadcasting: 1 I0907 08:45:23.437670 7 log.go:181] (0xc005ebc9a0) Reply frame received for 1 I0907 08:45:23.437712 7 log.go:181] (0xc005ebc9a0) (0xc000ee8280) Create stream I0907 08:45:23.437726 7 log.go:181] (0xc005ebc9a0) (0xc000ee8280) Stream added, broadcasting: 3 I0907 08:45:23.438743 7 log.go:181] (0xc005ebc9a0) Reply frame received for 3 I0907 08:45:23.438779 7 log.go:181] (0xc005ebc9a0) (0xc0038fb720) Create stream I0907 08:45:23.438793 7 log.go:181] (0xc005ebc9a0) (0xc0038fb720) Stream added, broadcasting: 5 I0907 08:45:23.439613 7 log.go:181] (0xc005ebc9a0) Reply frame received for 5 I0907 08:45:23.498497 7 log.go:181] (0xc005ebc9a0) Data frame received for 5 I0907 08:45:23.498528 7 log.go:181] (0xc0038fb720) (5) Data frame handling I0907 08:45:23.498561 7 log.go:181] (0xc005ebc9a0) Data frame received for 3 I0907 08:45:23.498606 7 log.go:181] (0xc000ee8280) (3) Data frame handling I0907 08:45:23.498644 7 log.go:181] (0xc000ee8280) (3) Data frame sent I0907 08:45:23.498673 7 log.go:181] (0xc005ebc9a0) Data frame received for 3 I0907 08:45:23.498697 7 log.go:181] (0xc000ee8280) (3) Data frame handling I0907 08:45:23.500270 7 log.go:181] (0xc005ebc9a0) Data 
frame received for 1 I0907 08:45:23.500301 7 log.go:181] (0xc003552460) (1) Data frame handling I0907 08:45:23.500318 7 log.go:181] (0xc003552460) (1) Data frame sent I0907 08:45:23.500357 7 log.go:181] (0xc005ebc9a0) (0xc003552460) Stream removed, broadcasting: 1 I0907 08:45:23.500390 7 log.go:181] (0xc005ebc9a0) Go away received I0907 08:45:23.500497 7 log.go:181] (0xc005ebc9a0) (0xc003552460) Stream removed, broadcasting: 1 I0907 08:45:23.500525 7 log.go:181] (0xc005ebc9a0) (0xc000ee8280) Stream removed, broadcasting: 3 I0907 08:45:23.500539 7 log.go:181] (0xc005ebc9a0) (0xc0038fb720) Stream removed, broadcasting: 5 Sep 7 08:45:23.500: INFO: Exec stderr: "" Sep 7 08:45:23.500: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7074 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 7 08:45:23.500: INFO: >>> kubeConfig: /root/.kube/config I0907 08:45:23.531406 7 log.go:181] (0xc0006a5080) (0xc00074a5a0) Create stream I0907 08:45:23.531439 7 log.go:181] (0xc0006a5080) (0xc00074a5a0) Stream added, broadcasting: 1 I0907 08:45:23.534201 7 log.go:181] (0xc0006a5080) Reply frame received for 1 I0907 08:45:23.534252 7 log.go:181] (0xc0006a5080) (0xc003ac15e0) Create stream I0907 08:45:23.534267 7 log.go:181] (0xc0006a5080) (0xc003ac15e0) Stream added, broadcasting: 3 I0907 08:45:23.535146 7 log.go:181] (0xc0006a5080) Reply frame received for 3 I0907 08:45:23.535174 7 log.go:181] (0xc0006a5080) (0xc000ee83c0) Create stream I0907 08:45:23.535186 7 log.go:181] (0xc0006a5080) (0xc000ee83c0) Stream added, broadcasting: 5 I0907 08:45:23.536106 7 log.go:181] (0xc0006a5080) Reply frame received for 5 I0907 08:45:23.598501 7 log.go:181] (0xc0006a5080) Data frame received for 5 I0907 08:45:23.598567 7 log.go:181] (0xc000ee83c0) (5) Data frame handling I0907 08:45:23.598617 7 log.go:181] (0xc0006a5080) Data frame received for 3 I0907 08:45:23.598642 7 log.go:181] (0xc003ac15e0) (3) Data 
frame handling I0907 08:45:23.598680 7 log.go:181] (0xc003ac15e0) (3) Data frame sent I0907 08:45:23.598706 7 log.go:181] (0xc0006a5080) Data frame received for 3 I0907 08:45:23.598727 7 log.go:181] (0xc003ac15e0) (3) Data frame handling I0907 08:45:23.600485 7 log.go:181] (0xc0006a5080) Data frame received for 1 I0907 08:45:23.600521 7 log.go:181] (0xc00074a5a0) (1) Data frame handling I0907 08:45:23.600551 7 log.go:181] (0xc00074a5a0) (1) Data frame sent I0907 08:45:23.600575 7 log.go:181] (0xc0006a5080) (0xc00074a5a0) Stream removed, broadcasting: 1 I0907 08:45:23.600692 7 log.go:181] (0xc0006a5080) (0xc00074a5a0) Stream removed, broadcasting: 1 I0907 08:45:23.600724 7 log.go:181] (0xc0006a5080) (0xc003ac15e0) Stream removed, broadcasting: 3 I0907 08:45:23.600747 7 log.go:181] (0xc0006a5080) (0xc000ee83c0) Stream removed, broadcasting: 5 Sep 7 08:45:23.600: INFO: Exec stderr: "" Sep 7 08:45:23.600: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7074 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} I0907 08:45:23.600820 7 log.go:181] (0xc0006a5080) Go away received Sep 7 08:45:23.600: INFO: >>> kubeConfig: /root/.kube/config I0907 08:45:23.639340 7 log.go:181] (0xc0006420b0) (0xc00074a8c0) Create stream I0907 08:45:23.639364 7 log.go:181] (0xc0006420b0) (0xc00074a8c0) Stream added, broadcasting: 1 I0907 08:45:23.641701 7 log.go:181] (0xc0006420b0) Reply frame received for 1 I0907 08:45:23.641758 7 log.go:181] (0xc0006420b0) (0xc000ee8640) Create stream I0907 08:45:23.641783 7 log.go:181] (0xc0006420b0) (0xc000ee8640) Stream added, broadcasting: 3 I0907 08:45:23.642930 7 log.go:181] (0xc0006420b0) Reply frame received for 3 I0907 08:45:23.642965 7 log.go:181] (0xc0006420b0) (0xc00074a960) Create stream I0907 08:45:23.642972 7 log.go:181] (0xc0006420b0) (0xc00074a960) Stream added, broadcasting: 5 I0907 08:45:23.644543 7 log.go:181] (0xc0006420b0) Reply frame 
received for 5 I0907 08:45:23.696096 7 log.go:181] (0xc0006420b0) Data frame received for 5 I0907 08:45:23.696125 7 log.go:181] (0xc00074a960) (5) Data frame handling I0907 08:45:23.696192 7 log.go:181] (0xc0006420b0) Data frame received for 3 I0907 08:45:23.696239 7 log.go:181] (0xc000ee8640) (3) Data frame handling I0907 08:45:23.696263 7 log.go:181] (0xc000ee8640) (3) Data frame sent I0907 08:45:23.696275 7 log.go:181] (0xc0006420b0) Data frame received for 3 I0907 08:45:23.696286 7 log.go:181] (0xc000ee8640) (3) Data frame handling I0907 08:45:23.697589 7 log.go:181] (0xc0006420b0) Data frame received for 1 I0907 08:45:23.697619 7 log.go:181] (0xc00074a8c0) (1) Data frame handling I0907 08:45:23.697640 7 log.go:181] (0xc00074a8c0) (1) Data frame sent I0907 08:45:23.697656 7 log.go:181] (0xc0006420b0) (0xc00074a8c0) Stream removed, broadcasting: 1 I0907 08:45:23.697708 7 log.go:181] (0xc0006420b0) Go away received I0907 08:45:23.697738 7 log.go:181] (0xc0006420b0) (0xc00074a8c0) Stream removed, broadcasting: 1 I0907 08:45:23.697754 7 log.go:181] (0xc0006420b0) (0xc000ee8640) Stream removed, broadcasting: 3 I0907 08:45:23.697768 7 log.go:181] (0xc0006420b0) (0xc00074a960) Stream removed, broadcasting: 5 Sep 7 08:45:23.697: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Sep 7 08:45:23.697: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7074 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 7 08:45:23.697: INFO: >>> kubeConfig: /root/.kube/config I0907 08:45:23.727481 7 log.go:181] (0xc0002ed760) (0xc000ee90e0) Create stream I0907 08:45:23.727512 7 log.go:181] (0xc0002ed760) (0xc000ee90e0) Stream added, broadcasting: 1 I0907 08:45:23.730023 7 log.go:181] (0xc0002ed760) Reply frame received for 1 I0907 08:45:23.730077 7 log.go:181] (0xc0002ed760) (0xc003ac1680) Create stream I0907 
08:45:23.730098 7 log.go:181] (0xc0002ed760) (0xc003ac1680) Stream added, broadcasting: 3 I0907 08:45:23.730944 7 log.go:181] (0xc0002ed760) Reply frame received for 3 I0907 08:45:23.731004 7 log.go:181] (0xc0002ed760) (0xc00074aa00) Create stream I0907 08:45:23.731031 7 log.go:181] (0xc0002ed760) (0xc00074aa00) Stream added, broadcasting: 5 I0907 08:45:23.731807 7 log.go:181] (0xc0002ed760) Reply frame received for 5 I0907 08:45:23.794878 7 log.go:181] (0xc0002ed760) Data frame received for 5 I0907 08:45:23.794924 7 log.go:181] (0xc00074aa00) (5) Data frame handling I0907 08:45:23.794987 7 log.go:181] (0xc0002ed760) Data frame received for 3 I0907 08:45:23.795020 7 log.go:181] (0xc003ac1680) (3) Data frame handling I0907 08:45:23.795048 7 log.go:181] (0xc003ac1680) (3) Data frame sent I0907 08:45:23.795085 7 log.go:181] (0xc0002ed760) Data frame received for 3 I0907 08:45:23.795112 7 log.go:181] (0xc003ac1680) (3) Data frame handling I0907 08:45:23.796371 7 log.go:181] (0xc0002ed760) Data frame received for 1 I0907 08:45:23.796383 7 log.go:181] (0xc000ee90e0) (1) Data frame handling I0907 08:45:23.796390 7 log.go:181] (0xc000ee90e0) (1) Data frame sent I0907 08:45:23.796518 7 log.go:181] (0xc0002ed760) (0xc000ee90e0) Stream removed, broadcasting: 1 I0907 08:45:23.796542 7 log.go:181] (0xc0002ed760) Go away received I0907 08:45:23.796627 7 log.go:181] (0xc0002ed760) (0xc000ee90e0) Stream removed, broadcasting: 1 I0907 08:45:23.796661 7 log.go:181] (0xc0002ed760) (0xc003ac1680) Stream removed, broadcasting: 3 I0907 08:45:23.796685 7 log.go:181] (0xc0002ed760) (0xc00074aa00) Stream removed, broadcasting: 5 Sep 7 08:45:23.796: INFO: Exec stderr: "" Sep 7 08:45:23.796: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7074 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 7 08:45:23.796: INFO: >>> kubeConfig: /root/.kube/config I0907 08:45:23.831998 7 log.go:181] 
(0xc00097e000) (0xc003ac1900) Create stream I0907 08:45:23.832080 7 log.go:181] (0xc00097e000) (0xc003ac1900) Stream added, broadcasting: 1 I0907 08:45:23.836838 7 log.go:181] (0xc00097e000) Reply frame received for 1 I0907 08:45:23.836866 7 log.go:181] (0xc00097e000) (0xc00383c000) Create stream I0907 08:45:23.836875 7 log.go:181] (0xc00097e000) (0xc00383c000) Stream added, broadcasting: 3 I0907 08:45:23.838004 7 log.go:181] (0xc00097e000) Reply frame received for 3 I0907 08:45:23.838033 7 log.go:181] (0xc00097e000) (0xc003ac19a0) Create stream I0907 08:45:23.838045 7 log.go:181] (0xc00097e000) (0xc003ac19a0) Stream added, broadcasting: 5 I0907 08:45:23.839163 7 log.go:181] (0xc00097e000) Reply frame received for 5 I0907 08:45:23.895040 7 log.go:181] (0xc00097e000) Data frame received for 5 I0907 08:45:23.895114 7 log.go:181] (0xc003ac19a0) (5) Data frame handling I0907 08:45:23.895162 7 log.go:181] (0xc00097e000) Data frame received for 3 I0907 08:45:23.895191 7 log.go:181] (0xc00383c000) (3) Data frame handling I0907 08:45:23.895229 7 log.go:181] (0xc00383c000) (3) Data frame sent I0907 08:45:23.895260 7 log.go:181] (0xc00097e000) Data frame received for 3 I0907 08:45:23.895280 7 log.go:181] (0xc00383c000) (3) Data frame handling I0907 08:45:23.896771 7 log.go:181] (0xc00097e000) Data frame received for 1 I0907 08:45:23.896815 7 log.go:181] (0xc003ac1900) (1) Data frame handling I0907 08:45:23.897021 7 log.go:181] (0xc003ac1900) (1) Data frame sent I0907 08:45:23.897049 7 log.go:181] (0xc00097e000) (0xc003ac1900) Stream removed, broadcasting: 1 I0907 08:45:23.897068 7 log.go:181] (0xc00097e000) Go away received I0907 08:45:23.897147 7 log.go:181] (0xc00097e000) (0xc003ac1900) Stream removed, broadcasting: 1 I0907 08:45:23.897169 7 log.go:181] (0xc00097e000) (0xc00383c000) Stream removed, broadcasting: 3 I0907 08:45:23.897179 7 log.go:181] (0xc00097e000) (0xc003ac19a0) Stream removed, broadcasting: 5 Sep 7 08:45:23.897: INFO: Exec stderr: "" STEP: Verifying 
/etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Sep 7 08:45:23.897: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7074 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 7 08:45:23.897: INFO: >>> kubeConfig: /root/.kube/config I0907 08:45:23.930170 7 log.go:181] (0xc000642840) (0xc00074af00) Create stream I0907 08:45:23.930196 7 log.go:181] (0xc000642840) (0xc00074af00) Stream added, broadcasting: 1 I0907 08:45:23.932476 7 log.go:181] (0xc000642840) Reply frame received for 1 I0907 08:45:23.932521 7 log.go:181] (0xc000642840) (0xc000ee9220) Create stream I0907 08:45:23.932537 7 log.go:181] (0xc000642840) (0xc000ee9220) Stream added, broadcasting: 3 I0907 08:45:23.933647 7 log.go:181] (0xc000642840) Reply frame received for 3 I0907 08:45:23.933706 7 log.go:181] (0xc000642840) (0xc003ac1ae0) Create stream I0907 08:45:23.933722 7 log.go:181] (0xc000642840) (0xc003ac1ae0) Stream added, broadcasting: 5 I0907 08:45:23.934699 7 log.go:181] (0xc000642840) Reply frame received for 5 I0907 08:45:24.013733 7 log.go:181] (0xc000642840) Data frame received for 5 I0907 08:45:24.013763 7 log.go:181] (0xc003ac1ae0) (5) Data frame handling I0907 08:45:24.013789 7 log.go:181] (0xc000642840) Data frame received for 3 I0907 08:45:24.013827 7 log.go:181] (0xc000ee9220) (3) Data frame handling I0907 08:45:24.013861 7 log.go:181] (0xc000ee9220) (3) Data frame sent I0907 08:45:24.013918 7 log.go:181] (0xc000642840) Data frame received for 3 I0907 08:45:24.013943 7 log.go:181] (0xc000ee9220) (3) Data frame handling I0907 08:45:24.015911 7 log.go:181] (0xc000642840) Data frame received for 1 I0907 08:45:24.015937 7 log.go:181] (0xc00074af00) (1) Data frame handling I0907 08:45:24.015956 7 log.go:181] (0xc00074af00) (1) Data frame sent I0907 08:45:24.015977 7 log.go:181] (0xc000642840) (0xc00074af00) Stream removed, broadcasting: 1 I0907 
08:45:24.016144 7 log.go:181] (0xc000642840) Go away received I0907 08:45:24.016193 7 log.go:181] (0xc000642840) (0xc00074af00) Stream removed, broadcasting: 1 I0907 08:45:24.016220 7 log.go:181] (0xc000642840) (0xc000ee9220) Stream removed, broadcasting: 3 I0907 08:45:24.016232 7 log.go:181] (0xc000642840) (0xc003ac1ae0) Stream removed, broadcasting: 5 Sep 7 08:45:24.016: INFO: Exec stderr: "" Sep 7 08:45:24.016: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7074 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 7 08:45:24.016: INFO: >>> kubeConfig: /root/.kube/config I0907 08:45:24.050432 7 log.go:181] (0xc000642f20) (0xc00074b400) Create stream I0907 08:45:24.050455 7 log.go:181] (0xc000642f20) (0xc00074b400) Stream added, broadcasting: 1 I0907 08:45:24.053149 7 log.go:181] (0xc000642f20) Reply frame received for 1 I0907 08:45:24.053196 7 log.go:181] (0xc000642f20) (0xc0035525a0) Create stream I0907 08:45:24.053207 7 log.go:181] (0xc000642f20) (0xc0035525a0) Stream added, broadcasting: 3 I0907 08:45:24.055354 7 log.go:181] (0xc000642f20) Reply frame received for 3 I0907 08:45:24.055399 7 log.go:181] (0xc000642f20) (0xc003552640) Create stream I0907 08:45:24.055417 7 log.go:181] (0xc000642f20) (0xc003552640) Stream added, broadcasting: 5 I0907 08:45:24.056929 7 log.go:181] (0xc000642f20) Reply frame received for 5 I0907 08:45:24.128423 7 log.go:181] (0xc000642f20) Data frame received for 3 I0907 08:45:24.128447 7 log.go:181] (0xc0035525a0) (3) Data frame handling I0907 08:45:24.128470 7 log.go:181] (0xc0035525a0) (3) Data frame sent I0907 08:45:24.128590 7 log.go:181] (0xc000642f20) Data frame received for 5 I0907 08:45:24.128606 7 log.go:181] (0xc003552640) (5) Data frame handling I0907 08:45:24.128629 7 log.go:181] (0xc000642f20) Data frame received for 3 I0907 08:45:24.128640 7 log.go:181] (0xc0035525a0) (3) Data frame handling I0907 
08:45:24.129953 7 log.go:181] (0xc000642f20) Data frame received for 1 I0907 08:45:24.129979 7 log.go:181] (0xc00074b400) (1) Data frame handling I0907 08:45:24.129996 7 log.go:181] (0xc00074b400) (1) Data frame sent I0907 08:45:24.130095 7 log.go:181] (0xc000642f20) (0xc00074b400) Stream removed, broadcasting: 1 I0907 08:45:24.130116 7 log.go:181] (0xc000642f20) Go away received I0907 08:45:24.130200 7 log.go:181] (0xc000642f20) (0xc00074b400) Stream removed, broadcasting: 1 I0907 08:45:24.130221 7 log.go:181] (0xc000642f20) (0xc0035525a0) Stream removed, broadcasting: 3 I0907 08:45:24.130234 7 log.go:181] (0xc000642f20) (0xc003552640) Stream removed, broadcasting: 5 Sep 7 08:45:24.130: INFO: Exec stderr: "" Sep 7 08:45:24.130: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7074 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 7 08:45:24.130: INFO: >>> kubeConfig: /root/.kube/config I0907 08:45:24.166506 7 log.go:181] (0xc0002ede40) (0xc000ee9400) Create stream I0907 08:45:24.166537 7 log.go:181] (0xc0002ede40) (0xc000ee9400) Stream added, broadcasting: 1 I0907 08:45:24.168570 7 log.go:181] (0xc0002ede40) Reply frame received for 1 I0907 08:45:24.168607 7 log.go:181] (0xc0002ede40) (0xc000ee94a0) Create stream I0907 08:45:24.168683 7 log.go:181] (0xc0002ede40) (0xc000ee94a0) Stream added, broadcasting: 3 I0907 08:45:24.169696 7 log.go:181] (0xc0002ede40) Reply frame received for 3 I0907 08:45:24.169740 7 log.go:181] (0xc0002ede40) (0xc0035526e0) Create stream I0907 08:45:24.169775 7 log.go:181] (0xc0002ede40) (0xc0035526e0) Stream added, broadcasting: 5 I0907 08:45:24.170806 7 log.go:181] (0xc0002ede40) Reply frame received for 5 I0907 08:45:24.231891 7 log.go:181] (0xc0002ede40) Data frame received for 5 I0907 08:45:24.231944 7 log.go:181] (0xc0035526e0) (5) Data frame handling I0907 08:45:24.231981 7 log.go:181] (0xc0002ede40) Data frame received for 
3 I0907 08:45:24.231999 7 log.go:181] (0xc000ee94a0) (3) Data frame handling I0907 08:45:24.232093 7 log.go:181] (0xc000ee94a0) (3) Data frame sent I0907 08:45:24.232113 7 log.go:181] (0xc0002ede40) Data frame received for 3 I0907 08:45:24.232129 7 log.go:181] (0xc000ee94a0) (3) Data frame handling I0907 08:45:24.233750 7 log.go:181] (0xc0002ede40) Data frame received for 1 I0907 08:45:24.233783 7 log.go:181] (0xc000ee9400) (1) Data frame handling I0907 08:45:24.233809 7 log.go:181] (0xc000ee9400) (1) Data frame sent I0907 08:45:24.233827 7 log.go:181] (0xc0002ede40) (0xc000ee9400) Stream removed, broadcasting: 1 I0907 08:45:24.233858 7 log.go:181] (0xc0002ede40) Go away received I0907 08:45:24.234007 7 log.go:181] (0xc0002ede40) (0xc000ee9400) Stream removed, broadcasting: 1 I0907 08:45:24.234037 7 log.go:181] (0xc0002ede40) (0xc000ee94a0) Stream removed, broadcasting: 3 I0907 08:45:24.234066 7 log.go:181] (0xc0002ede40) (0xc0035526e0) Stream removed, broadcasting: 5 Sep 7 08:45:24.234: INFO: Exec stderr: "" Sep 7 08:45:24.234: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7074 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 7 08:45:24.234: INFO: >>> kubeConfig: /root/.kube/config I0907 08:45:24.266140 7 log.go:181] (0xc005ebd080) (0xc003552960) Create stream I0907 08:45:24.266184 7 log.go:181] (0xc005ebd080) (0xc003552960) Stream added, broadcasting: 1 I0907 08:45:24.268890 7 log.go:181] (0xc005ebd080) Reply frame received for 1 I0907 08:45:24.268937 7 log.go:181] (0xc005ebd080) (0xc00383c0a0) Create stream I0907 08:45:24.268953 7 log.go:181] (0xc005ebd080) (0xc00383c0a0) Stream added, broadcasting: 3 I0907 08:45:24.270000 7 log.go:181] (0xc005ebd080) Reply frame received for 3 I0907 08:45:24.270040 7 log.go:181] (0xc005ebd080) (0xc00383c140) Create stream I0907 08:45:24.270055 7 log.go:181] (0xc005ebd080) (0xc00383c140) Stream added, 
broadcasting: 5 I0907 08:45:24.271242 7 log.go:181] (0xc005ebd080) Reply frame received for 5 I0907 08:45:24.318531 7 log.go:181] (0xc005ebd080) Data frame received for 3 I0907 08:45:24.318566 7 log.go:181] (0xc00383c0a0) (3) Data frame handling I0907 08:45:24.318594 7 log.go:181] (0xc00383c0a0) (3) Data frame sent I0907 08:45:24.318614 7 log.go:181] (0xc005ebd080) Data frame received for 3 I0907 08:45:24.318635 7 log.go:181] (0xc00383c0a0) (3) Data frame handling I0907 08:45:24.318672 7 log.go:181] (0xc005ebd080) Data frame received for 5 I0907 08:45:24.318693 7 log.go:181] (0xc00383c140) (5) Data frame handling I0907 08:45:24.320558 7 log.go:181] (0xc005ebd080) Data frame received for 1 I0907 08:45:24.320574 7 log.go:181] (0xc003552960) (1) Data frame handling I0907 08:45:24.320583 7 log.go:181] (0xc003552960) (1) Data frame sent I0907 08:45:24.320858 7 log.go:181] (0xc005ebd080) (0xc003552960) Stream removed, broadcasting: 1 I0907 08:45:24.320972 7 log.go:181] (0xc005ebd080) Go away received I0907 08:45:24.321074 7 log.go:181] (0xc005ebd080) (0xc003552960) Stream removed, broadcasting: 1 I0907 08:45:24.321101 7 log.go:181] (0xc005ebd080) (0xc00383c0a0) Stream removed, broadcasting: 3 I0907 08:45:24.321113 7 log.go:181] (0xc005ebd080) (0xc00383c140) Stream removed, broadcasting: 5 Sep 7 08:45:24.321: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:45:24.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-7074" for this suite. 
• [SLOW TEST:13.228 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":170,"skipped":2545,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:45:24.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-9158
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 7 08:45:24.384: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Sep 7 08:45:24.469: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep 7 08:45:26.643: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep 7 08:45:28.472: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep 7 08:45:30.493: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 7 08:45:32.473: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 7 08:45:34.473: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 7 08:45:36.473: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 7 08:45:38.473: INFO: The status of Pod netserver-0 is Running (Ready = true)
Sep 7 08:45:38.478: INFO: The status of Pod netserver-1 is Running (Ready = false)
Sep 7 08:45:40.483: INFO: The status of Pod netserver-1 is Running (Ready = false)
Sep 7 08:45:42.483: INFO: The status of Pod netserver-1 is Running (Ready = false)
Sep 7 08:45:44.483: INFO: The status of Pod netserver-1 is Running (Ready = false)
Sep 7 08:45:46.482: INFO: The status of Pod netserver-1 is Running (Ready = false)
Sep 7 08:45:48.483: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Sep 7 08:45:52.559: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.128 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9158 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 7 08:45:52.559: INFO: >>> kubeConfig: /root/.kube/config
I0907 08:45:52.594200 7 log.go:181] (0xc000d88e70) (0xc0037b5b80) Create stream
I0907 08:45:52.594229 7 log.go:181] (0xc000d88e70) (0xc0037b5b80) Stream added, broadcasting: 1
I0907 08:45:52.596548 7 log.go:181] (0xc000d88e70) Reply frame received for 1
I0907 08:45:52.596591 7 log.go:181] (0xc000d88e70) (0xc00383c1e0) Create stream
I0907 08:45:52.596606 7 log.go:181] (0xc000d88e70) (0xc00383c1e0) Stream added, broadcasting: 3
I0907 08:45:52.597615 7 log.go:181] (0xc000d88e70) Reply frame received for 3
I0907 08:45:52.597659 7 log.go:181] (0xc000d88e70) (0xc0024b8500) Create stream
I0907 08:45:52.597681 7 log.go:181] (0xc000d88e70) (0xc0024b8500) Stream added, broadcasting: 5
I0907 08:45:52.598581 7 log.go:181] (0xc000d88e70) Reply frame received for 5
I0907 08:45:53.653017 7 log.go:181] (0xc000d88e70) Data frame received for 3
I0907 08:45:53.653064 7 log.go:181] (0xc00383c1e0) (3) Data frame handling
I0907 08:45:53.653084 7 log.go:181] (0xc00383c1e0) (3) Data frame sent
I0907 08:45:53.653098 7 log.go:181] (0xc000d88e70) Data frame received for 3
I0907 08:45:53.653132 7 log.go:181] (0xc00383c1e0) (3) Data frame handling
I0907 08:45:53.653154 7 log.go:181] (0xc000d88e70) Data frame received for 5
I0907 08:45:53.653170 7 log.go:181] (0xc0024b8500) (5) Data frame handling
I0907 08:45:53.655069 7 log.go:181] (0xc000d88e70) Data frame received for 1
I0907 08:45:53.655100 7 log.go:181] (0xc0037b5b80) (1) Data frame handling
I0907 08:45:53.655131 7 log.go:181] (0xc0037b5b80) (1) Data frame sent
I0907 08:45:53.655151 7 log.go:181] (0xc000d88e70) (0xc0037b5b80) Stream removed, broadcasting: 1
I0907 08:45:53.655227 7 log.go:181] (0xc000d88e70) Go away received
I0907 08:45:53.655291 7 log.go:181] (0xc000d88e70) (0xc0037b5b80) Stream removed, broadcasting: 1
I0907 08:45:53.655318 7 log.go:181] (0xc000d88e70) (0xc00383c1e0) Stream removed, broadcasting: 3
I0907 08:45:53.655328 7 log.go:181] (0xc000d88e70) (0xc0024b8500) Stream removed, broadcasting: 5
Sep 7 08:45:53.655: INFO: Found all expected endpoints: [netserver-0]
Sep 7 08:45:53.658: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.108 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9158 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 7 08:45:53.658: INFO: >>> kubeConfig: /root/.kube/config
I0907 08:45:53.694359 7 log.go:181] (0xc0004c98c0) (0xc00383c3c0) Create stream
I0907 08:45:53.694390 7 log.go:181] (0xc0004c98c0) (0xc00383c3c0) Stream added, broadcasting: 1
I0907 08:45:53.700265 7 log.go:181] (0xc0004c98c0) Reply frame received for 1
I0907 08:45:53.700324 7 log.go:181] (0xc0004c98c0) (0xc0005b3c20) Create stream
I0907 08:45:53.700344 7 log.go:181] (0xc0004c98c0) (0xc0005b3c20) Stream added, broadcasting: 3
I0907 08:45:53.701438 7 log.go:181] (0xc0004c98c0) Reply frame received for 3
I0907 08:45:53.701485 7 log.go:181] (0xc0004c98c0) (0xc0005b3d60) Create stream
I0907 08:45:53.701501 7 log.go:181] (0xc0004c98c0) (0xc0005b3d60) Stream added, broadcasting: 5
I0907 08:45:53.702383 7 log.go:181] (0xc0004c98c0) Reply frame received for 5
I0907 08:45:54.751271 7 log.go:181] (0xc0004c98c0) Data frame received for 5
I0907 08:45:54.751317 7 log.go:181] (0xc0005b3d60) (5) Data frame handling
I0907 08:45:54.751378 7 log.go:181] (0xc0004c98c0) Data frame received for 3
I0907 08:45:54.751437 7 log.go:181] (0xc0005b3c20) (3) Data frame handling
I0907 08:45:54.751471 7 log.go:181] (0xc0005b3c20) (3) Data frame sent
I0907 08:45:54.751497 7 log.go:181] (0xc0004c98c0) Data frame received for 3
I0907 08:45:54.751514 7 log.go:181] (0xc0005b3c20) (3) Data frame handling
I0907 08:45:54.753325 7 log.go:181] (0xc0004c98c0) Data frame received for 1
I0907 08:45:54.753345 7 log.go:181] (0xc00383c3c0) (1) Data frame handling
I0907 08:45:54.753354 7 log.go:181] (0xc00383c3c0) (1) Data frame sent
I0907 08:45:54.753368 7 log.go:181] (0xc0004c98c0) (0xc00383c3c0) Stream removed, broadcasting: 1
I0907 08:45:54.753387 7 log.go:181] (0xc0004c98c0) Go away received
I0907 08:45:54.753568 7 log.go:181] (0xc0004c98c0) (0xc00383c3c0) Stream removed, broadcasting: 1
I0907 08:45:54.753604 7 log.go:181] (0xc0004c98c0) (0xc0005b3c20) Stream removed, broadcasting: 3
I0907 08:45:54.753620 7 log.go:181] (0xc0004c98c0) (0xc0005b3d60) Stream removed, broadcasting: 5
Sep 7 08:45:54.753: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:45:54.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9158" for this suite.
• [SLOW TEST:30.432 seconds]
[sig-network] Networking
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":171,"skipped":2574,"failed":0}
[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:45:54.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service externalname-service with the type=ExternalName in namespace services-6491
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-6491
I0907 08:45:55.564769 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-6491, replica count: 2
I0907 08:45:58.615283 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0907 08:46:01.615511 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Sep 7 08:46:01.615: INFO: Creating new exec pod
Sep 7 08:46:07.075: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-6491 execpodhkzsg -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Sep 7 08:46:07.326: INFO: stderr: "I0907 08:46:07.219795 2101 log.go:181] (0xc000866fd0) (0xc0004797c0) Create stream\nI0907 08:46:07.219878 2101 log.go:181] (0xc000866fd0) (0xc0004797c0) Stream added, broadcasting: 1\nI0907 08:46:07.225370 2101 log.go:181] (0xc000866fd0) Reply frame received for 1\nI0907 08:46:07.225422 2101 log.go:181] (0xc000866fd0) (0xc000478280) Create stream\nI0907 08:46:07.225438 2101 log.go:181] (0xc000866fd0) (0xc000478280) Stream added, broadcasting: 3\nI0907 08:46:07.226503 2101 log.go:181] (0xc000866fd0) Reply frame received for 3\nI0907 08:46:07.226530 2101 log.go:181] (0xc000866fd0) (0xc0004c1f40) Create stream\nI0907 08:46:07.226537 2101 log.go:181] (0xc000866fd0) (0xc0004c1f40) Stream added, broadcasting: 5\nI0907 08:46:07.227472 2101 log.go:181] (0xc000866fd0) Reply frame received for 5\nI0907 08:46:07.318229 2101 log.go:181] (0xc000866fd0) Data frame received for 5\nI0907 08:46:07.318262 2101 log.go:181] (0xc0004c1f40) (5) Data frame handling\nI0907 08:46:07.318282 2101 log.go:181] (0xc0004c1f40) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0907 08:46:07.319049 2101 log.go:181] (0xc000866fd0) Data frame received for 5\nI0907 08:46:07.319072 2101 log.go:181] (0xc0004c1f40) (5) Data frame handling\nI0907 08:46:07.319094 2101 log.go:181] (0xc0004c1f40) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0907 08:46:07.319107 2101 log.go:181] (0xc000866fd0) Data frame received for 5\nI0907 08:46:07.319144 2101 log.go:181] (0xc0004c1f40) (5) Data frame handling\nI0907 08:46:07.319227 2101 log.go:181] (0xc000866fd0) Data frame received for 3\nI0907 08:46:07.319270 2101 log.go:181] (0xc000478280) (3) Data frame handling\nI0907 08:46:07.321189 2101 log.go:181] (0xc000866fd0) Data frame received for 1\nI0907 08:46:07.321258 2101 log.go:181] (0xc0004797c0) (1) Data frame handling\nI0907 08:46:07.321286 2101 log.go:181] (0xc0004797c0) (1) Data frame sent\nI0907 08:46:07.321301 2101 log.go:181] (0xc000866fd0) (0xc0004797c0) Stream removed, broadcasting: 1\nI0907 08:46:07.321312 2101 log.go:181] (0xc000866fd0) Go away received\nI0907 08:46:07.321860 2101 log.go:181] (0xc000866fd0) (0xc0004797c0) Stream removed, broadcasting: 1\nI0907 08:46:07.321882 2101 log.go:181] (0xc000866fd0) (0xc000478280) Stream removed, broadcasting: 3\nI0907 08:46:07.321893 2101 log.go:181] (0xc000866fd0) (0xc0004c1f40) Stream removed, broadcasting: 5\n"
Sep 7 08:46:07.326: INFO: stdout: ""
Sep 7 08:46:07.327: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-6491 execpodhkzsg -- /bin/sh -x -c nc -zv -t -w 2 10.98.77.0 80'
Sep 7 08:46:07.518: INFO: stderr: "I0907 08:46:07.444396 2119 log.go:181] (0xc00002fad0) (0xc00056ebe0) Create stream\nI0907 08:46:07.444447 2119 log.go:181] (0xc00002fad0) (0xc00056ebe0) Stream added, broadcasting: 1\nI0907 08:46:07.446491 2119 log.go:181] (0xc00002fad0) Reply frame received for 1\nI0907 08:46:07.446520 2119 log.go:181] (0xc00002fad0) (0xc000828500) Create stream\nI0907 08:46:07.446536 2119 log.go:181] (0xc00002fad0) (0xc000828500) Stream added, broadcasting: 3\nI0907 08:46:07.447382 2119 log.go:181] (0xc00002fad0) Reply frame received for 3\nI0907 08:46:07.447410 2119 log.go:181] (0xc00002fad0) (0xc000730280) Create stream\nI0907 08:46:07.447418 2119 log.go:181] (0xc00002fad0) (0xc000730280) Stream added, broadcasting: 5\nI0907 08:46:07.448150 2119 log.go:181] (0xc00002fad0) Reply frame received for 5\nI0907 08:46:07.513095 2119 log.go:181] (0xc00002fad0) Data frame received for 3\nI0907 08:46:07.513132 2119 log.go:181] (0xc000828500) (3) Data frame handling\nI0907 08:46:07.513163 2119 log.go:181] (0xc00002fad0) Data frame received for 5\nI0907 08:46:07.513198 2119 log.go:181] (0xc000730280) (5) Data frame handling\nI0907 08:46:07.513220 2119 log.go:181] (0xc000730280) (5) Data frame sent\nI0907 08:46:07.513232 2119 log.go:181] (0xc00002fad0) Data frame received for 5\nI0907 08:46:07.513241 2119 log.go:181] (0xc000730280) (5) Data frame handling\n+ nc -zv -t -w 2 10.98.77.0 80\nConnection to 10.98.77.0 80 port [tcp/http] succeeded!\nI0907 08:46:07.514530 2119 log.go:181] (0xc00002fad0) Data frame received for 1\nI0907 08:46:07.514548 2119 log.go:181] (0xc00056ebe0) (1) Data frame handling\nI0907 08:46:07.514560 2119 log.go:181] (0xc00056ebe0) (1) Data frame sent\nI0907 08:46:07.514574 2119 log.go:181] (0xc00002fad0) (0xc00056ebe0) Stream removed, broadcasting: 1\nI0907 08:46:07.514753 2119 log.go:181] (0xc00002fad0) Go away received\nI0907 08:46:07.515020 2119 log.go:181] (0xc00002fad0) (0xc00056ebe0) Stream removed, broadcasting: 1\nI0907 08:46:07.515037 2119 log.go:181] (0xc00002fad0) (0xc000828500) Stream removed, broadcasting: 3\nI0907 08:46:07.515047 2119 log.go:181] (0xc00002fad0) (0xc000730280) Stream removed, broadcasting: 5\n"
Sep 7 08:46:07.518: INFO: stdout: ""
Sep 7 08:46:07.518: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:46:07.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6491" for this suite.
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:12.812 seconds]
[sig-network] Services
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to change the type from ExternalName to ClusterIP [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":303,"completed":172,"skipped":2574,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:46:07.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Sep 7 08:46:08.278: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Sep 7 08:46:10.285: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065168, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065168, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065168, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065168, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 7 08:46:12.290: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065168, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065168, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065168, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065168, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 7 08:46:15.326: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 7 08:46:15.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:46:16.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-7609" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:9.072 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to convert a non homogeneous list of CRs [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":303,"completed":173,"skipped":2586,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:46:16.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name s-test-opt-del-bf7a7ba5-1728-4fca-bae3-4842e8a29c4a
STEP: Creating secret with name s-test-opt-upd-8043d44f-bbf3-4a82-a580-ffe341a651e3
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-bf7a7ba5-1728-4fca-bae3-4842e8a29c4a
STEP: Updating secret s-test-opt-upd-8043d44f-bbf3-4a82-a580-ffe341a651e3
STEP: Creating secret with name s-test-opt-create-f6f28d5c-ffa3-4cb4-9629-4e376148d6c9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:46:25.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4521" for this suite.
• [SLOW TEST:8.391 seconds]
[sig-storage] Secrets
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":174,"skipped":2607,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:46:25.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 7 08:46:25.890: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 7 08:46:27.908: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065185, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065185, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065185, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065185, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 7 08:46:30.998: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:46:31.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6329" for this suite.
STEP: Destroying namespace "webhook-6329-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.569 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate pod and apply defaults after mutation [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":303,"completed":175,"skipped":2644,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:46:31.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Sep 7 08:46:33.184: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Sep 7 08:46:35.231: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065193, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065193, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065193, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065193, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 7 08:46:38.263: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 7 08:46:38.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:46:39.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-8556" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:7.907 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to convert from CR v1 to CR v2 [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":303,"completed":176,"skipped":2667,"failed":0}
SSSSSSSSSSSS
------------------------------
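[Editor's note] The pod-to-pod connectivity checks earlier in this run all reduce to a one-shot netcat probe executed inside a pod, e.g. `echo hostName | nc -w 1 -u 10.244.2.128 8081`: send one UDP datagram, wait up to a second for the reply, and fail if none arrives. A minimal local sketch of that request/reply exchange, using only the Python standard library — the `netserver-0` reply string here is a hard-coded stand-in for the agnhost netserver, not output from a live cluster:

```python
import socket
import threading

# Stand-in for the agnhost netserver: answer a "hostName" datagram with a
# fixed hostname, the way netserver-0 answers the e2e probe on UDP 8081.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
port = srv.getsockname()[1]

def serve_one():
    data, addr = srv.recvfrom(1024)
    if data.strip() == b"hostName":
        srv.sendto(b"netserver-0", addr)

t = threading.Thread(target=serve_one)
t.start()

# Client side, equivalent to `echo hostName | nc -w 1 -u <ip> <port>`:
# one datagram out, then wait at most one second for the reply.
cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.settimeout(1.0)
cli.sendto(b"hostName\n", ("127.0.0.1", port))
reply, _ = cli.recvfrom(1024)
t.join()
print(reply.decode())  # netserver-0
```

The real test marks the endpoint as found only when the reply names the expected pod; a `socket.timeout` here corresponds to the probe's `-w 1` deadline expiring with no answer.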
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:46:39.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 7 08:46:39.704: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fc32af21-361e-483d-a714-69334767c423" in namespace "downward-api-8372" to be "Succeeded or Failed"
Sep 7 08:46:39.706: INFO: Pod "downwardapi-volume-fc32af21-361e-483d-a714-69334767c423": Phase="Pending", Reason="", readiness=false. Elapsed: 2.63305ms
Sep 7 08:46:41.710: INFO: Pod "downwardapi-volume-fc32af21-361e-483d-a714-69334767c423": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006066286s
Sep 7 08:46:43.714: INFO: Pod "downwardapi-volume-fc32af21-361e-483d-a714-69334767c423": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010761259s
STEP: Saw pod success
Sep 7 08:46:43.715: INFO: Pod "downwardapi-volume-fc32af21-361e-483d-a714-69334767c423" satisfied condition "Succeeded or Failed"
Sep 7 08:46:43.718: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-fc32af21-361e-483d-a714-69334767c423 container client-container:
STEP: delete the pod
Sep 7 08:46:43.749: INFO: Waiting for pod downwardapi-volume-fc32af21-361e-483d-a714-69334767c423 to disappear
Sep 7 08:46:43.780: INFO: Pod downwardapi-volume-fc32af21-361e-483d-a714-69334767c423 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:46:43.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8372" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":177,"skipped":2679,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:46:43.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 7 08:46:43.924: INFO: Waiting up to 5m0s for pod "downwardapi-volume-036f6a99-4570-4be5-8e45-11415f905a1b" in namespace "projected-1592" to be "Succeeded or Failed" Sep 7 08:46:43.934: INFO: Pod "downwardapi-volume-036f6a99-4570-4be5-8e45-11415f905a1b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.852742ms Sep 7 08:46:45.939: INFO: Pod "downwardapi-volume-036f6a99-4570-4be5-8e45-11415f905a1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014996103s Sep 7 08:46:47.944: INFO: Pod "downwardapi-volume-036f6a99-4570-4be5-8e45-11415f905a1b": Phase="Running", Reason="", readiness=true. Elapsed: 4.019669848s Sep 7 08:46:49.948: INFO: Pod "downwardapi-volume-036f6a99-4570-4be5-8e45-11415f905a1b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.024378763s STEP: Saw pod success Sep 7 08:46:49.948: INFO: Pod "downwardapi-volume-036f6a99-4570-4be5-8e45-11415f905a1b" satisfied condition "Succeeded or Failed" Sep 7 08:46:49.952: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-036f6a99-4570-4be5-8e45-11415f905a1b container client-container: STEP: delete the pod Sep 7 08:46:49.993: INFO: Waiting for pod downwardapi-volume-036f6a99-4570-4be5-8e45-11415f905a1b to disappear Sep 7 08:46:50.020: INFO: Pod downwardapi-volume-036f6a99-4570-4be5-8e45-11415f905a1b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:46:50.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1592" for this suite. • [SLOW TEST:6.241 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":178,"skipped":2694,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:46:50.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 7 08:46:50.895: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 7 08:46:52.907: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065211, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065211, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065211, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065210, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 7 08:46:54.922: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065211, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065211, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065211, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065210, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 7 08:46:57.947: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 08:46:58.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2966-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:46:59.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8820" for this 
suite. STEP: Destroying namespace "webhook-8820-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.386 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":303,"completed":179,"skipped":2739,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:46:59.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should create and stop a replication controller [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Sep 7 08:46:59.496: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-393' Sep 7 08:47:00.042: INFO: stderr: "" Sep 7 08:47:00.042: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Sep 7 08:47:00.042: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-393' Sep 7 08:47:00.289: INFO: stderr: "" Sep 7 08:47:00.289: INFO: stdout: "update-demo-nautilus-f7cc7 update-demo-nautilus-sqt9p " Sep 7 08:47:00.289: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f7cc7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-393' Sep 7 08:47:00.444: INFO: stderr: "" Sep 7 08:47:00.444: INFO: stdout: "" Sep 7 08:47:00.444: INFO: update-demo-nautilus-f7cc7 is created but not running Sep 7 08:47:05.444: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-393' Sep 7 08:47:05.554: INFO: stderr: "" Sep 7 08:47:05.554: INFO: stdout: "update-demo-nautilus-f7cc7 update-demo-nautilus-sqt9p " Sep 7 08:47:05.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f7cc7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-393' Sep 7 08:47:05.661: INFO: stderr: "" Sep 7 08:47:05.661: INFO: stdout: "true" Sep 7 08:47:05.661: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f7cc7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-393' Sep 7 08:47:05.761: INFO: stderr: "" Sep 7 08:47:05.761: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 7 08:47:05.761: INFO: validating pod update-demo-nautilus-f7cc7 Sep 7 08:47:05.765: INFO: got data: { "image": "nautilus.jpg" } Sep 7 08:47:05.765: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Sep 7 08:47:05.765: INFO: update-demo-nautilus-f7cc7 is verified up and running Sep 7 08:47:05.765: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sqt9p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-393' Sep 7 08:47:05.861: INFO: stderr: "" Sep 7 08:47:05.861: INFO: stdout: "true" Sep 7 08:47:05.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sqt9p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-393' Sep 7 08:47:05.956: INFO: stderr: "" Sep 7 08:47:05.956: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 7 08:47:05.956: INFO: validating pod update-demo-nautilus-sqt9p Sep 7 08:47:05.960: INFO: got data: { "image": "nautilus.jpg" } Sep 7 08:47:05.960: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 7 08:47:05.960: INFO: update-demo-nautilus-sqt9p is verified up and running STEP: using delete to clean up resources Sep 7 08:47:05.960: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-393' Sep 7 08:47:06.072: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Sep 7 08:47:06.072: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Sep 7 08:47:06.072: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-393' Sep 7 08:47:06.180: INFO: stderr: "No resources found in kubectl-393 namespace.\n" Sep 7 08:47:06.180: INFO: stdout: "" Sep 7 08:47:06.180: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-393 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 7 08:47:06.275: INFO: stderr: "" Sep 7 08:47:06.275: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:47:06.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-393" for this suite. 
• [SLOW TEST:7.012 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should create and stop a replication controller [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":303,"completed":180,"skipped":2754,"failed":0} [sig-api-machinery] Secrets should patch a secret [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:47:06.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:47:06.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1414" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":303,"completed":181,"skipped":2754,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:47:06.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Sep 7 08:47:11.941: INFO: Successfully updated pod "labelsupdate45209e72-40a0-4c8f-b5f8-0930fbd1f1c2" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:47:13.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "projected-9509" for this suite. • [SLOW TEST:7.279 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":182,"skipped":2759,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:47:13.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 7 08:47:14.698: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 
7 08:47:16.709: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065234, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065234, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065234, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065234, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 7 08:47:18.776: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065234, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065234, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065234, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065234, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the 
service has paired with the endpoint Sep 7 08:47:21.750: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:47:21.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-268" for this suite. STEP: Destroying namespace "webhook-268-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.951 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":303,"completed":183,"skipped":2769,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:47:21.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches 
starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:47:26.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9666" for this suite. • [SLOW TEST:5.013 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":303,"completed":184,"skipped":2807,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:47:26.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath Sep 7 08:47:27.065: INFO: Waiting up to 5m0s for pod "var-expansion-1e5e87de-830c-407d-a251-b491d7fd3a54" in namespace "var-expansion-6801" to be "Succeeded or Failed" Sep 7 08:47:27.069: INFO: Pod "var-expansion-1e5e87de-830c-407d-a251-b491d7fd3a54": Phase="Pending", Reason="", readiness=false. Elapsed: 3.923572ms Sep 7 08:47:29.074: INFO: Pod "var-expansion-1e5e87de-830c-407d-a251-b491d7fd3a54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008390691s Sep 7 08:47:31.077: INFO: Pod "var-expansion-1e5e87de-830c-407d-a251-b491d7fd3a54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011665299s STEP: Saw pod success Sep 7 08:47:31.077: INFO: Pod "var-expansion-1e5e87de-830c-407d-a251-b491d7fd3a54" satisfied condition "Succeeded or Failed" Sep 7 08:47:31.079: INFO: Trying to get logs from node latest-worker2 pod var-expansion-1e5e87de-830c-407d-a251-b491d7fd3a54 container dapi-container: STEP: delete the pod Sep 7 08:47:31.095: INFO: Waiting for pod var-expansion-1e5e87de-830c-407d-a251-b491d7fd3a54 to disappear Sep 7 08:47:31.099: INFO: Pod var-expansion-1e5e87de-830c-407d-a251-b491d7fd3a54 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:47:31.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6801" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":303,"completed":185,"skipped":2846,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:47:31.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 08:47:31.206: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Sep 7 08:47:34.182: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5343 create -f -' Sep 7 08:47:37.761: INFO: stderr: "" Sep 7 08:47:37.761: INFO: stdout: "e2e-test-crd-publish-openapi-7303-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Sep 7 08:47:37.761: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5343 delete e2e-test-crd-publish-openapi-7303-crds test-cr' Sep 7 08:47:37.897: INFO: stderr: "" Sep 7 
08:47:37.897: INFO: stdout: "e2e-test-crd-publish-openapi-7303-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Sep 7 08:47:37.897: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5343 apply -f -' Sep 7 08:47:38.187: INFO: stderr: "" Sep 7 08:47:38.187: INFO: stdout: "e2e-test-crd-publish-openapi-7303-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Sep 7 08:47:38.187: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5343 delete e2e-test-crd-publish-openapi-7303-crds test-cr' Sep 7 08:47:38.298: INFO: stderr: "" Sep 7 08:47:38.298: INFO: stdout: "e2e-test-crd-publish-openapi-7303-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Sep 7 08:47:38.298: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7303-crds' Sep 7 08:47:38.620: INFO: stderr: "" Sep 7 08:47:38.620: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7303-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:47:41.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5343" for this suite. 
• [SLOW TEST:10.531 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":303,"completed":186,"skipped":2846,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:47:41.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create services for rc [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Sep 7 08:47:41.729: INFO: namespace kubectl-7340 Sep 7 08:47:41.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 
--kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7340' Sep 7 08:47:42.063: INFO: stderr: "" Sep 7 08:47:42.063: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Sep 7 08:47:43.100: INFO: Selector matched 1 pods for map[app:agnhost] Sep 7 08:47:43.100: INFO: Found 0 / 1 Sep 7 08:47:44.069: INFO: Selector matched 1 pods for map[app:agnhost] Sep 7 08:47:44.069: INFO: Found 0 / 1 Sep 7 08:47:45.418: INFO: Selector matched 1 pods for map[app:agnhost] Sep 7 08:47:45.418: INFO: Found 0 / 1 Sep 7 08:47:46.067: INFO: Selector matched 1 pods for map[app:agnhost] Sep 7 08:47:46.067: INFO: Found 0 / 1 Sep 7 08:47:47.208: INFO: Selector matched 1 pods for map[app:agnhost] Sep 7 08:47:47.208: INFO: Found 1 / 1 Sep 7 08:47:47.208: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Sep 7 08:47:47.211: INFO: Selector matched 1 pods for map[app:agnhost] Sep 7 08:47:47.211: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Sep 7 08:47:47.211: INFO: wait on agnhost-primary startup in kubectl-7340 Sep 7 08:47:47.211: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config logs agnhost-primary-zdxb2 agnhost-primary --namespace=kubectl-7340' Sep 7 08:47:47.338: INFO: stderr: "" Sep 7 08:47:47.338: INFO: stdout: "Paused\n" STEP: exposing RC Sep 7 08:47:47.338: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7340' Sep 7 08:47:47.506: INFO: stderr: "" Sep 7 08:47:47.506: INFO: stdout: "service/rm2 exposed\n" Sep 7 08:47:47.671: INFO: Service rm2 in namespace kubectl-7340 found. 
STEP: exposing service Sep 7 08:47:49.677: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7340' Sep 7 08:47:50.127: INFO: stderr: "" Sep 7 08:47:50.127: INFO: stdout: "service/rm3 exposed\n" Sep 7 08:47:50.170: INFO: Service rm3 in namespace kubectl-7340 found. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:47:52.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7340" for this suite. • [SLOW TEST:10.546 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1246 should create services for rc [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":303,"completed":187,"skipped":2855,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:47:52.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Sep 7 08:47:56.325: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:47:56.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3227" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":188,"skipped":2861,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:47:56.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-8603 STEP: creating replication controller nodeport-test in namespace services-8603 I0907 08:47:56.691730 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-8603, replica count: 2 I0907 08:47:59.742167 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:48:02.742422 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 
0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 7 08:48:02.742: INFO: Creating new exec pod Sep 7 08:48:07.763: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-8603 execpodwcfrv -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Sep 7 08:48:07.968: INFO: stderr: "I0907 08:48:07.894773 2498 log.go:181] (0xc0002973f0) (0xc000644320) Create stream\nI0907 08:48:07.894838 2498 log.go:181] (0xc0002973f0) (0xc000644320) Stream added, broadcasting: 1\nI0907 08:48:07.896693 2498 log.go:181] (0xc0002973f0) Reply frame received for 1\nI0907 08:48:07.896746 2498 log.go:181] (0xc0002973f0) (0xc000cdfb80) Create stream\nI0907 08:48:07.896760 2498 log.go:181] (0xc0002973f0) (0xc000cdfb80) Stream added, broadcasting: 3\nI0907 08:48:07.897612 2498 log.go:181] (0xc0002973f0) Reply frame received for 3\nI0907 08:48:07.897641 2498 log.go:181] (0xc0002973f0) (0xc00014ca00) Create stream\nI0907 08:48:07.897649 2498 log.go:181] (0xc0002973f0) (0xc00014ca00) Stream added, broadcasting: 5\nI0907 08:48:07.898427 2498 log.go:181] (0xc0002973f0) Reply frame received for 5\nI0907 08:48:07.961435 2498 log.go:181] (0xc0002973f0) Data frame received for 5\nI0907 08:48:07.961466 2498 log.go:181] (0xc00014ca00) (5) Data frame handling\nI0907 08:48:07.961498 2498 log.go:181] (0xc00014ca00) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0907 08:48:07.961814 2498 log.go:181] (0xc0002973f0) Data frame received for 5\nI0907 08:48:07.961839 2498 log.go:181] (0xc00014ca00) (5) Data frame handling\nI0907 08:48:07.961866 2498 log.go:181] (0xc00014ca00) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0907 08:48:07.962186 2498 log.go:181] (0xc0002973f0) Data frame received for 5\nI0907 08:48:07.962217 2498 log.go:181] (0xc00014ca00) (5) Data frame handling\nI0907 08:48:07.962282 2498 log.go:181] (0xc0002973f0) Data frame received for 3\nI0907 08:48:07.962320 
2498 log.go:181] (0xc000cdfb80) (3) Data frame handling\nI0907 08:48:07.963643 2498 log.go:181] (0xc0002973f0) Data frame received for 1\nI0907 08:48:07.963746 2498 log.go:181] (0xc000644320) (1) Data frame handling\nI0907 08:48:07.963778 2498 log.go:181] (0xc000644320) (1) Data frame sent\nI0907 08:48:07.963808 2498 log.go:181] (0xc0002973f0) (0xc000644320) Stream removed, broadcasting: 1\nI0907 08:48:07.963836 2498 log.go:181] (0xc0002973f0) Go away received\nI0907 08:48:07.964418 2498 log.go:181] (0xc0002973f0) (0xc000644320) Stream removed, broadcasting: 1\nI0907 08:48:07.964444 2498 log.go:181] (0xc0002973f0) (0xc000cdfb80) Stream removed, broadcasting: 3\nI0907 08:48:07.964455 2498 log.go:181] (0xc0002973f0) (0xc00014ca00) Stream removed, broadcasting: 5\n" Sep 7 08:48:07.968: INFO: stdout: "" Sep 7 08:48:07.969: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-8603 execpodwcfrv -- /bin/sh -x -c nc -zv -t -w 2 10.105.134.196 80' Sep 7 08:48:08.188: INFO: stderr: "I0907 08:48:08.112555 2516 log.go:181] (0xc0005a2000) (0xc000d82000) Create stream\nI0907 08:48:08.112612 2516 log.go:181] (0xc0005a2000) (0xc000d82000) Stream added, broadcasting: 1\nI0907 08:48:08.114446 2516 log.go:181] (0xc0005a2000) Reply frame received for 1\nI0907 08:48:08.114482 2516 log.go:181] (0xc0005a2000) (0xc000adaf00) Create stream\nI0907 08:48:08.114497 2516 log.go:181] (0xc0005a2000) (0xc000adaf00) Stream added, broadcasting: 3\nI0907 08:48:08.115492 2516 log.go:181] (0xc0005a2000) Reply frame received for 3\nI0907 08:48:08.115538 2516 log.go:181] (0xc0005a2000) (0xc000d820a0) Create stream\nI0907 08:48:08.115554 2516 log.go:181] (0xc0005a2000) (0xc000d820a0) Stream added, broadcasting: 5\nI0907 08:48:08.116780 2516 log.go:181] (0xc0005a2000) Reply frame received for 5\nI0907 08:48:08.181520 2516 log.go:181] (0xc0005a2000) Data frame received for 3\nI0907 08:48:08.181545 2516 log.go:181] (0xc000adaf00) 
(3) Data frame handling\nI0907 08:48:08.181576 2516 log.go:181] (0xc0005a2000) Data frame received for 5\nI0907 08:48:08.181604 2516 log.go:181] (0xc000d820a0) (5) Data frame handling\nI0907 08:48:08.181625 2516 log.go:181] (0xc000d820a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.105.134.196 80\nConnection to 10.105.134.196 80 port [tcp/http] succeeded!\nI0907 08:48:08.181649 2516 log.go:181] (0xc0005a2000) Data frame received for 5\nI0907 08:48:08.181674 2516 log.go:181] (0xc000d820a0) (5) Data frame handling\nI0907 08:48:08.183371 2516 log.go:181] (0xc0005a2000) Data frame received for 1\nI0907 08:48:08.183409 2516 log.go:181] (0xc000d82000) (1) Data frame handling\nI0907 08:48:08.183456 2516 log.go:181] (0xc000d82000) (1) Data frame sent\nI0907 08:48:08.183503 2516 log.go:181] (0xc0005a2000) (0xc000d82000) Stream removed, broadcasting: 1\nI0907 08:48:08.183541 2516 log.go:181] (0xc0005a2000) Go away received\nI0907 08:48:08.183930 2516 log.go:181] (0xc0005a2000) (0xc000d82000) Stream removed, broadcasting: 1\nI0907 08:48:08.183953 2516 log.go:181] (0xc0005a2000) (0xc000adaf00) Stream removed, broadcasting: 3\nI0907 08:48:08.183966 2516 log.go:181] (0xc0005a2000) (0xc000d820a0) Stream removed, broadcasting: 5\n" Sep 7 08:48:08.189: INFO: stdout: "" Sep 7 08:48:08.189: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-8603 execpodwcfrv -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 31241' Sep 7 08:48:08.402: INFO: stderr: "I0907 08:48:08.326110 2535 log.go:181] (0xc000753760) (0xc0002f0b40) Create stream\nI0907 08:48:08.326195 2535 log.go:181] (0xc000753760) (0xc0002f0b40) Stream added, broadcasting: 1\nI0907 08:48:08.332570 2535 log.go:181] (0xc000753760) Reply frame received for 1\nI0907 08:48:08.332626 2535 log.go:181] (0xc000753760) (0xc0002f0000) Create stream\nI0907 08:48:08.332638 2535 log.go:181] (0xc000753760) (0xc0002f0000) Stream added, broadcasting: 3\nI0907 08:48:08.333607 
2535 log.go:181] (0xc000753760) Reply frame received for 3\nI0907 08:48:08.333631 2535 log.go:181] (0xc000753760) (0xc0005a8000) Create stream\nI0907 08:48:08.333638 2535 log.go:181] (0xc000753760) (0xc0005a8000) Stream added, broadcasting: 5\nI0907 08:48:08.334389 2535 log.go:181] (0xc000753760) Reply frame received for 5\nI0907 08:48:08.397074 2535 log.go:181] (0xc000753760) Data frame received for 5\nI0907 08:48:08.397110 2535 log.go:181] (0xc0005a8000) (5) Data frame handling\nI0907 08:48:08.397128 2535 log.go:181] (0xc0005a8000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.15 31241\nI0907 08:48:08.397470 2535 log.go:181] (0xc000753760) Data frame received for 5\nI0907 08:48:08.397497 2535 log.go:181] (0xc0005a8000) (5) Data frame handling\nI0907 08:48:08.397514 2535 log.go:181] (0xc0005a8000) (5) Data frame sent\nConnection to 172.18.0.15 31241 port [tcp/31241] succeeded!\nI0907 08:48:08.397571 2535 log.go:181] (0xc000753760) Data frame received for 3\nI0907 08:48:08.397589 2535 log.go:181] (0xc0002f0000) (3) Data frame handling\nI0907 08:48:08.397781 2535 log.go:181] (0xc000753760) Data frame received for 5\nI0907 08:48:08.397797 2535 log.go:181] (0xc0005a8000) (5) Data frame handling\nI0907 08:48:08.399060 2535 log.go:181] (0xc000753760) Data frame received for 1\nI0907 08:48:08.399122 2535 log.go:181] (0xc0002f0b40) (1) Data frame handling\nI0907 08:48:08.399167 2535 log.go:181] (0xc0002f0b40) (1) Data frame sent\nI0907 08:48:08.399204 2535 log.go:181] (0xc000753760) (0xc0002f0b40) Stream removed, broadcasting: 1\nI0907 08:48:08.399226 2535 log.go:181] (0xc000753760) Go away received\nI0907 08:48:08.399493 2535 log.go:181] (0xc000753760) (0xc0002f0b40) Stream removed, broadcasting: 1\nI0907 08:48:08.399504 2535 log.go:181] (0xc000753760) (0xc0002f0000) Stream removed, broadcasting: 3\nI0907 08:48:08.399511 2535 log.go:181] (0xc000753760) (0xc0005a8000) Stream removed, broadcasting: 5\n" Sep 7 08:48:08.402: INFO: stdout: "" Sep 7 08:48:08.402: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-8603 execpodwcfrv -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31241' Sep 7 08:48:08.609: INFO: stderr: "I0907 08:48:08.530139 2553 log.go:181] (0xc00003a420) (0xc000c40000) Create stream\nI0907 08:48:08.530188 2553 log.go:181] (0xc00003a420) (0xc000c40000) Stream added, broadcasting: 1\nI0907 08:48:08.531860 2553 log.go:181] (0xc00003a420) Reply frame received for 1\nI0907 08:48:08.531924 2553 log.go:181] (0xc00003a420) (0xc000c98320) Create stream\nI0907 08:48:08.531950 2553 log.go:181] (0xc00003a420) (0xc000c98320) Stream added, broadcasting: 3\nI0907 08:48:08.532901 2553 log.go:181] (0xc00003a420) Reply frame received for 3\nI0907 08:48:08.532942 2553 log.go:181] (0xc00003a420) (0xc0009f4000) Create stream\nI0907 08:48:08.532954 2553 log.go:181] (0xc00003a420) (0xc0009f4000) Stream added, broadcasting: 5\nI0907 08:48:08.533650 2553 log.go:181] (0xc00003a420) Reply frame received for 5\nI0907 08:48:08.602665 2553 log.go:181] (0xc00003a420) Data frame received for 3\nI0907 08:48:08.602703 2553 log.go:181] (0xc000c98320) (3) Data frame handling\nI0907 08:48:08.602736 2553 log.go:181] (0xc00003a420) Data frame received for 5\nI0907 08:48:08.602753 2553 log.go:181] (0xc0009f4000) (5) Data frame handling\nI0907 08:48:08.602769 2553 log.go:181] (0xc0009f4000) (5) Data frame sent\nI0907 08:48:08.602782 2553 log.go:181] (0xc00003a420) Data frame received for 5\nI0907 08:48:08.602794 2553 log.go:181] (0xc0009f4000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 31241\nConnection to 172.18.0.14 31241 port [tcp/31241] succeeded!\nI0907 08:48:08.604572 2553 log.go:181] (0xc00003a420) Data frame received for 1\nI0907 08:48:08.604623 2553 log.go:181] (0xc000c40000) (1) Data frame handling\nI0907 08:48:08.604642 2553 log.go:181] (0xc000c40000) (1) Data frame sent\nI0907 08:48:08.604666 2553 log.go:181] (0xc00003a420) (0xc000c40000) Stream removed, 
broadcasting: 1\nI0907 08:48:08.604701 2553 log.go:181] (0xc00003a420) Go away received\nI0907 08:48:08.605104 2553 log.go:181] (0xc00003a420) (0xc000c40000) Stream removed, broadcasting: 1\nI0907 08:48:08.605134 2553 log.go:181] (0xc00003a420) (0xc000c98320) Stream removed, broadcasting: 3\nI0907 08:48:08.605157 2553 log.go:181] (0xc00003a420) (0xc0009f4000) Stream removed, broadcasting: 5\n" Sep 7 08:48:08.609: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:48:08.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8603" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:12.088 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":303,"completed":189,"skipped":2883,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:48:08.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Sep 7 08:48:08.670: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:48:24.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9091" for this suite. 
• [SLOW TEST:15.727 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":303,"completed":190,"skipped":2886,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:48:24.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to 
reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Sep 7 08:48:34.340: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:48:35.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3989" for this suite. • [SLOW TEST:11.372 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":191,"skipped":2901,"failed":0} SS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:48:35.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3465 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3465;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3465 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3465;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3465.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3465.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3465.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3465.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3465.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3465.svc;check="$$(dig 
+tcp +noall +answer +search _http._tcp.dns-test-service.dns-3465.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3465.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3465.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3465.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3465.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3465.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3465.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 85.123.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.123.85_udp@PTR;check="$$(dig +tcp +noall +answer +search 85.123.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.123.85_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3465 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3465;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3465 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3465;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3465.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3465.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3465.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3465.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3465.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3465.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3465.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3465.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3465.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3465.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3465.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3465.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3465.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 85.123.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.123.85_udp@PTR;check="$$(dig +tcp +noall +answer +search 85.123.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.123.85_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 7 08:48:46.167: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:46.170: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:46.173: INFO: Unable to read wheezy_udp@dns-test-service.dns-3465 from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:46.176: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3465 from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:46.180: INFO: Unable to read wheezy_udp@dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods 
dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:46.183: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:46.186: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:46.189: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:46.208: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:46.211: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:46.214: INFO: Unable to read jessie_udp@dns-test-service.dns-3465 from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:46.217: INFO: Unable to read jessie_tcp@dns-test-service.dns-3465 from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:46.220: INFO: Unable to read jessie_udp@dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested 
resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:46.223: INFO: Unable to read jessie_tcp@dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:46.226: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:46.229: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:46.247: INFO: Lookups using dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3465 wheezy_tcp@dns-test-service.dns-3465 wheezy_udp@dns-test-service.dns-3465.svc wheezy_tcp@dns-test-service.dns-3465.svc wheezy_udp@_http._tcp.dns-test-service.dns-3465.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3465.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3465 jessie_tcp@dns-test-service.dns-3465 jessie_udp@dns-test-service.dns-3465.svc jessie_tcp@dns-test-service.dns-3465.svc jessie_udp@_http._tcp.dns-test-service.dns-3465.svc jessie_tcp@_http._tcp.dns-test-service.dns-3465.svc] Sep 7 08:48:51.252: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:51.256: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the 
requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:51.258: INFO: Unable to read wheezy_udp@dns-test-service.dns-3465 from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:51.261: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3465 from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:51.263: INFO: Unable to read wheezy_udp@dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:51.266: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:51.269: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:51.271: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:51.377: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:51.380: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could 
not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:51.383: INFO: Unable to read jessie_udp@dns-test-service.dns-3465 from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:51.386: INFO: Unable to read jessie_tcp@dns-test-service.dns-3465 from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:51.389: INFO: Unable to read jessie_udp@dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:51.392: INFO: Unable to read jessie_tcp@dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:51.395: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:51.398: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:51.448: INFO: Lookups using dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3465 wheezy_tcp@dns-test-service.dns-3465 wheezy_udp@dns-test-service.dns-3465.svc wheezy_tcp@dns-test-service.dns-3465.svc wheezy_udp@_http._tcp.dns-test-service.dns-3465.svc 
wheezy_tcp@_http._tcp.dns-test-service.dns-3465.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3465 jessie_tcp@dns-test-service.dns-3465 jessie_udp@dns-test-service.dns-3465.svc jessie_tcp@dns-test-service.dns-3465.svc jessie_udp@_http._tcp.dns-test-service.dns-3465.svc jessie_tcp@_http._tcp.dns-test-service.dns-3465.svc] Sep 7 08:48:56.252: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:56.255: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:56.259: INFO: Unable to read wheezy_udp@dns-test-service.dns-3465 from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:56.262: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3465 from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:56.265: INFO: Unable to read wheezy_udp@dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:56.268: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:56.275: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3465.svc from pod 
dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:56.278: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:56.297: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:56.300: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:56.303: INFO: Unable to read jessie_udp@dns-test-service.dns-3465 from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:56.306: INFO: Unable to read jessie_tcp@dns-test-service.dns-3465 from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:56.309: INFO: Unable to read jessie_udp@dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:56.312: INFO: Unable to read jessie_tcp@dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:56.316: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:56.319: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:48:56.339: INFO: Lookups using dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3465 wheezy_tcp@dns-test-service.dns-3465 wheezy_udp@dns-test-service.dns-3465.svc wheezy_tcp@dns-test-service.dns-3465.svc wheezy_udp@_http._tcp.dns-test-service.dns-3465.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3465.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3465 jessie_tcp@dns-test-service.dns-3465 jessie_udp@dns-test-service.dns-3465.svc jessie_tcp@dns-test-service.dns-3465.svc jessie_udp@_http._tcp.dns-test-service.dns-3465.svc jessie_tcp@_http._tcp.dns-test-service.dns-3465.svc] Sep 7 08:49:01.251: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:01.256: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:01.259: INFO: Unable to read wheezy_udp@dns-test-service.dns-3465 from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:01.262: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-3465 from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:01.265: INFO: Unable to read wheezy_udp@dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:01.267: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:01.270: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:01.278: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:01.313: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:01.315: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:01.317: INFO: Unable to read jessie_udp@dns-test-service.dns-3465 from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:01.319: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-3465 from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:01.322: INFO: Unable to read jessie_udp@dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:01.324: INFO: Unable to read jessie_tcp@dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:01.326: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:01.328: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:01.345: INFO: Lookups using dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3465 wheezy_tcp@dns-test-service.dns-3465 wheezy_udp@dns-test-service.dns-3465.svc wheezy_tcp@dns-test-service.dns-3465.svc wheezy_udp@_http._tcp.dns-test-service.dns-3465.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3465.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3465 jessie_tcp@dns-test-service.dns-3465 jessie_udp@dns-test-service.dns-3465.svc jessie_tcp@dns-test-service.dns-3465.svc jessie_udp@_http._tcp.dns-test-service.dns-3465.svc jessie_tcp@_http._tcp.dns-test-service.dns-3465.svc] 
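The probe commands shown earlier derive two names from raw IPs: a reverse-lookup (`in-addr.arpa`) name built by reversing the octets of the service cluster IP (10.106.123.85 in this run), and a dashed pod A record under `<namespace>.pod.cluster.local` built from `hostname -i`. A minimal sketch of both transformations, using the IP and namespace values from this run (the helper function names are illustrative, not part of the test framework):

```shell
# Reverse the octets of an IPv4 address to form the in-addr.arpa PTR name,
# as the probe does for the service cluster IP.
to_ptr_name() {
  echo "$1" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}'
}

# Build the dashed pod A record name, mirroring the probe's
# `hostname -i | awk -F. ...` pipeline (namespace dns-3465 in this run).
to_pod_a_record() {
  echo "$1" | awk -F. -v ns="$2" '{print $1"-"$2"-"$3"-"$4"."ns".pod.cluster.local"}'
}

to_ptr_name 10.106.123.85           # 85.123.106.10.in-addr.arpa.
to_pod_a_record 10.244.1.7 dns-3465 # 10-244-1-7.dns-3465.pod.cluster.local
```

The escaped `$$` in the logged commands is template escaping; by the time the probe pod runs them, they are the ordinary `$(...)` and `${...}` forms above.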
Sep 7 08:49:06.252: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:06.257: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:06.260: INFO: Unable to read wheezy_udp@dns-test-service.dns-3465 from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:06.264: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3465 from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:06.268: INFO: Unable to read wheezy_udp@dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:06.271: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:06.275: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:06.278: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods 
dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:06.301: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:06.305: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:06.308: INFO: Unable to read jessie_udp@dns-test-service.dns-3465 from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:06.311: INFO: Unable to read jessie_tcp@dns-test-service.dns-3465 from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:06.314: INFO: Unable to read jessie_udp@dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:06.318: INFO: Unable to read jessie_tcp@dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:06.321: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:06.324: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested 
resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:06.343: INFO: Lookups using dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3465 wheezy_tcp@dns-test-service.dns-3465 wheezy_udp@dns-test-service.dns-3465.svc wheezy_tcp@dns-test-service.dns-3465.svc wheezy_udp@_http._tcp.dns-test-service.dns-3465.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3465.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3465 jessie_tcp@dns-test-service.dns-3465 jessie_udp@dns-test-service.dns-3465.svc jessie_tcp@dns-test-service.dns-3465.svc jessie_udp@_http._tcp.dns-test-service.dns-3465.svc jessie_tcp@_http._tcp.dns-test-service.dns-3465.svc] Sep 7 08:49:11.252: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:11.256: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:11.259: INFO: Unable to read wheezy_udp@dns-test-service.dns-3465 from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:11.262: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3465 from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:11.264: INFO: Unable to read wheezy_udp@dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods 
dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:11.267: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:11.270: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:11.272: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:11.289: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:11.291: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:11.294: INFO: Unable to read jessie_udp@dns-test-service.dns-3465 from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:11.296: INFO: Unable to read jessie_tcp@dns-test-service.dns-3465 from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:11.298: INFO: Unable to read jessie_udp@dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested 
resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:11.301: INFO: Unable to read jessie_tcp@dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:11.303: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:11.305: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3465.svc from pod dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb: the server could not find the requested resource (get pods dns-test-64fff636-f671-4bbc-936e-a21126bc90cb) Sep 7 08:49:11.322: INFO: Lookups using dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3465 wheezy_tcp@dns-test-service.dns-3465 wheezy_udp@dns-test-service.dns-3465.svc wheezy_tcp@dns-test-service.dns-3465.svc wheezy_udp@_http._tcp.dns-test-service.dns-3465.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3465.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3465 jessie_tcp@dns-test-service.dns-3465 jessie_udp@dns-test-service.dns-3465.svc jessie_tcp@dns-test-service.dns-3465.svc jessie_udp@_http._tcp.dns-test-service.dns-3465.svc jessie_tcp@_http._tcp.dns-test-service.dns-3465.svc] Sep 7 08:49:16.346: INFO: DNS probes using dns-3465/dns-test-64fff636-f671-4bbc-936e-a21126bc90cb succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:49:17.016: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3465" for this suite. • [SLOW TEST:41.376 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":303,"completed":192,"skipped":2903,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:49:17.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification 
for the watched object
Sep 7 08:49:17.252: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1813 /api/v1/namespaces/watch-1813/configmaps/e2e-watch-test-label-changed d905d118-6d35-4596-b7b3-c251d692c7ec 290653 0 2020-09-07 08:49:17 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-07 08:49:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 7 08:49:17.252: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1813 /api/v1/namespaces/watch-1813/configmaps/e2e-watch-test-label-changed d905d118-6d35-4596-b7b3-c251d692c7ec 290654 0 2020-09-07 08:49:17 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-07 08:49:17 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 7 08:49:17.253: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1813 /api/v1/namespaces/watch-1813/configmaps/e2e-watch-test-label-changed d905d118-6d35-4596-b7b3-c251d692c7ec 290655 0 2020-09-07 08:49:17 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-07 08:49:17 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Sep 7 08:49:27.298: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1813 /api/v1/namespaces/watch-1813/configmaps/e2e-watch-test-label-changed d905d118-6d35-4596-b7b3-c251d692c7ec 290706 0 2020-09-07 08:49:17 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-07 08:49:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 7 08:49:27.298: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1813 /api/v1/namespaces/watch-1813/configmaps/e2e-watch-test-label-changed d905d118-6d35-4596-b7b3-c251d692c7ec 290707 0 2020-09-07 08:49:17 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-07 08:49:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 7 08:49:27.298: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1813 /api/v1/namespaces/watch-1813/configmaps/e2e-watch-test-label-changed d905d118-6d35-4596-b7b3-c251d692c7ec 290708 0 2020-09-07 08:49:17 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-07 08:49:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:49:27.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1813" for this suite.
• [SLOW TEST:10.210 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":303,"completed":193,"skipped":2948,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:49:27.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Sep 7 08:49:27.398: INFO: Waiting up to 5m0s for pod "pod-7275fc58-2db1-4d22-a1a4-91118ded374e" in namespace "emptydir-3129" to be "Succeeded or Failed"
Sep 7 08:49:27.414: INFO: Pod "pod-7275fc58-2db1-4d22-a1a4-91118ded374e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.321217ms
Sep 7 08:49:29.418: INFO: Pod "pod-7275fc58-2db1-4d22-a1a4-91118ded374e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020535852s
Sep 7 08:49:31.423: INFO: Pod "pod-7275fc58-2db1-4d22-a1a4-91118ded374e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024969926s
STEP: Saw pod success
Sep 7 08:49:31.423: INFO: Pod "pod-7275fc58-2db1-4d22-a1a4-91118ded374e" satisfied condition "Succeeded or Failed"
Sep 7 08:49:31.426: INFO: Trying to get logs from node latest-worker pod pod-7275fc58-2db1-4d22-a1a4-91118ded374e container test-container:
STEP: delete the pod
Sep 7 08:49:31.531: INFO: Waiting for pod pod-7275fc58-2db1-4d22-a1a4-91118ded374e to disappear
Sep 7 08:49:31.538: INFO: Pod pod-7275fc58-2db1-4d22-a1a4-91118ded374e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:49:31.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3129" for this suite.
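The EmptyDir test above creates a pod that mounts a tmpfs-backed emptyDir volume, verifies it is usable by root with 0777 permissions, and waits for the pod to reach "Succeeded". A minimal manifest of the same shape is sketched below; the pod name, image, and command are illustrative stand-ins, not the generated names from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-demo        # illustrative; the e2e pod name is generated
spec:
  restartPolicy: Never            # lets the pod terminate as Succeeded or Failed
  containers:
  - name: test-container
    image: busybox                # stand-in for the e2e test image
    command:
    - sh
    - -c
    - stat -c '%a' /test-volume && touch /test-volume/probe   # print mode, verify writability
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory              # tmpfs-backed, as in "(root,0777,tmpfs)"
```

Applying a manifest like this and reading the container logs mirrors what the framework does here: it fetches logs from `test-container` to check the reported mode, then deletes the pod and waits for it to disappear.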
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":194,"skipped":2955,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:49:31.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-2223
STEP: creating service affinity-clusterip-transition in namespace services-2223
STEP: creating replication controller affinity-clusterip-transition in namespace services-2223
I0907 08:49:31.700225 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-2223, replica count: 3
I0907 08:49:34.750687 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0907 08:49:37.750909 7 runners.go:190] affinity-clusterip-transition
Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 7 08:49:37.756: INFO: Creating new exec pod Sep 7 08:49:42.782: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-2223 execpod-affinity6thqx -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Sep 7 08:49:42.992: INFO: stderr: "I0907 08:49:42.910923 2571 log.go:181] (0xc00057f550) (0xc0004d26e0) Create stream\nI0907 08:49:42.910966 2571 log.go:181] (0xc00057f550) (0xc0004d26e0) Stream added, broadcasting: 1\nI0907 08:49:42.912473 2571 log.go:181] (0xc00057f550) Reply frame received for 1\nI0907 08:49:42.912501 2571 log.go:181] (0xc00057f550) (0xc0004d2780) Create stream\nI0907 08:49:42.912509 2571 log.go:181] (0xc00057f550) (0xc0004d2780) Stream added, broadcasting: 3\nI0907 08:49:42.913280 2571 log.go:181] (0xc00057f550) Reply frame received for 3\nI0907 08:49:42.913307 2571 log.go:181] (0xc00057f550) (0xc0005760a0) Create stream\nI0907 08:49:42.913316 2571 log.go:181] (0xc00057f550) (0xc0005760a0) Stream added, broadcasting: 5\nI0907 08:49:42.914141 2571 log.go:181] (0xc00057f550) Reply frame received for 5\nI0907 08:49:42.984924 2571 log.go:181] (0xc00057f550) Data frame received for 5\nI0907 08:49:42.984966 2571 log.go:181] (0xc0005760a0) (5) Data frame handling\nI0907 08:49:42.984991 2571 log.go:181] (0xc0005760a0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0907 08:49:42.985913 2571 log.go:181] (0xc00057f550) Data frame received for 5\nI0907 08:49:42.986024 2571 log.go:181] (0xc0005760a0) (5) Data frame handling\nI0907 08:49:42.986078 2571 log.go:181] (0xc0005760a0) (5) Data frame sent\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0907 08:49:42.986111 2571 log.go:181] (0xc00057f550) Data frame received for 3\nI0907 08:49:42.986139 2571 log.go:181] (0xc0004d2780) (3) Data frame 
handling\nI0907 08:49:42.986232 2571 log.go:181] (0xc00057f550) Data frame received for 5\nI0907 08:49:42.986252 2571 log.go:181] (0xc0005760a0) (5) Data frame handling\nI0907 08:49:42.988417 2571 log.go:181] (0xc00057f550) Data frame received for 1\nI0907 08:49:42.988442 2571 log.go:181] (0xc0004d26e0) (1) Data frame handling\nI0907 08:49:42.988458 2571 log.go:181] (0xc0004d26e0) (1) Data frame sent\nI0907 08:49:42.988482 2571 log.go:181] (0xc00057f550) (0xc0004d26e0) Stream removed, broadcasting: 1\nI0907 08:49:42.988525 2571 log.go:181] (0xc00057f550) Go away received\nI0907 08:49:42.988834 2571 log.go:181] (0xc00057f550) (0xc0004d26e0) Stream removed, broadcasting: 1\nI0907 08:49:42.988852 2571 log.go:181] (0xc00057f550) (0xc0004d2780) Stream removed, broadcasting: 3\nI0907 08:49:42.988863 2571 log.go:181] (0xc00057f550) (0xc0005760a0) Stream removed, broadcasting: 5\n" Sep 7 08:49:42.992: INFO: stdout: "" Sep 7 08:49:42.993: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-2223 execpod-affinity6thqx -- /bin/sh -x -c nc -zv -t -w 2 10.102.41.132 80' Sep 7 08:49:43.190: INFO: stderr: "I0907 08:49:43.122337 2589 log.go:181] (0xc00074b1e0) (0xc000742780) Create stream\nI0907 08:49:43.122396 2589 log.go:181] (0xc00074b1e0) (0xc000742780) Stream added, broadcasting: 1\nI0907 08:49:43.127571 2589 log.go:181] (0xc00074b1e0) Reply frame received for 1\nI0907 08:49:43.127612 2589 log.go:181] (0xc00074b1e0) (0xc000742000) Create stream\nI0907 08:49:43.127625 2589 log.go:181] (0xc00074b1e0) (0xc000742000) Stream added, broadcasting: 3\nI0907 08:49:43.128987 2589 log.go:181] (0xc00074b1e0) Reply frame received for 3\nI0907 08:49:43.129024 2589 log.go:181] (0xc00074b1e0) (0xc0007fc000) Create stream\nI0907 08:49:43.129067 2589 log.go:181] (0xc00074b1e0) (0xc0007fc000) Stream added, broadcasting: 5\nI0907 08:49:43.130174 2589 log.go:181] (0xc00074b1e0) Reply frame received for 5\nI0907 
08:49:43.183757 2589 log.go:181] (0xc00074b1e0) Data frame received for 3\nI0907 08:49:43.183826 2589 log.go:181] (0xc000742000) (3) Data frame handling\nI0907 08:49:43.183866 2589 log.go:181] (0xc00074b1e0) Data frame received for 5\nI0907 08:49:43.183881 2589 log.go:181] (0xc0007fc000) (5) Data frame handling\nI0907 08:49:43.183901 2589 log.go:181] (0xc0007fc000) (5) Data frame sent\n+ nc -zv -t -w 2 10.102.41.132 80\nConnection to 10.102.41.132 80 port [tcp/http] succeeded!\nI0907 08:49:43.183913 2589 log.go:181] (0xc00074b1e0) Data frame received for 5\nI0907 08:49:43.183966 2589 log.go:181] (0xc0007fc000) (5) Data frame handling\nI0907 08:49:43.185602 2589 log.go:181] (0xc00074b1e0) Data frame received for 1\nI0907 08:49:43.185631 2589 log.go:181] (0xc000742780) (1) Data frame handling\nI0907 08:49:43.185644 2589 log.go:181] (0xc000742780) (1) Data frame sent\nI0907 08:49:43.185660 2589 log.go:181] (0xc00074b1e0) (0xc000742780) Stream removed, broadcasting: 1\nI0907 08:49:43.185693 2589 log.go:181] (0xc00074b1e0) Go away received\nI0907 08:49:43.186037 2589 log.go:181] (0xc00074b1e0) (0xc000742780) Stream removed, broadcasting: 1\nI0907 08:49:43.186057 2589 log.go:181] (0xc00074b1e0) (0xc000742000) Stream removed, broadcasting: 3\nI0907 08:49:43.186068 2589 log.go:181] (0xc00074b1e0) (0xc0007fc000) Stream removed, broadcasting: 5\n" Sep 7 08:49:43.190: INFO: stdout: "" Sep 7 08:49:43.201: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-2223 execpod-affinity6thqx -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.102.41.132:80/ ; done' Sep 7 08:49:43.566: INFO: stderr: "I0907 08:49:43.381042 2607 log.go:181] (0xc0002a4160) (0xc0009d8320) Create stream\nI0907 08:49:43.381099 2607 log.go:181] (0xc0002a4160) (0xc0009d8320) Stream added, broadcasting: 1\nI0907 08:49:43.382889 2607 log.go:181] (0xc0002a4160) Reply frame received for 1\nI0907 
08:49:43.382948 2607 log.go:181] (0xc0002a4160) (0xc000c757c0) Create stream\nI0907 08:49:43.382968 2607 log.go:181] (0xc0002a4160) (0xc000c757c0) Stream added, broadcasting: 3\nI0907 08:49:43.383999 2607 log.go:181] (0xc0002a4160) Reply frame received for 3\nI0907 08:49:43.384130 2607 log.go:181] (0xc0002a4160) (0xc000c75860) Create stream\nI0907 08:49:43.384152 2607 log.go:181] (0xc0002a4160) (0xc000c75860) Stream added, broadcasting: 5\nI0907 08:49:43.384995 2607 log.go:181] (0xc0002a4160) Reply frame received for 5\nI0907 08:49:43.456889 2607 log.go:181] (0xc0002a4160) Data frame received for 5\nI0907 08:49:43.456948 2607 log.go:181] (0xc000c75860) (5) Data frame handling\nI0907 08:49:43.456965 2607 log.go:181] (0xc000c75860) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.456991 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.457003 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.457028 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.463136 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.463170 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.463206 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.463918 2607 log.go:181] (0xc0002a4160) Data frame received for 5\nI0907 08:49:43.463968 2607 log.go:181] (0xc000c75860) (5) Data frame handling\nI0907 08:49:43.464069 2607 log.go:181] (0xc000c75860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.464090 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.464098 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.464109 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.469160 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.469213 2607 log.go:181] (0xc000c757c0) (3) Data frame 
handling\nI0907 08:49:43.469253 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.469721 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.469748 2607 log.go:181] (0xc0002a4160) Data frame received for 5\nI0907 08:49:43.469785 2607 log.go:181] (0xc000c75860) (5) Data frame handling\nI0907 08:49:43.469808 2607 log.go:181] (0xc000c75860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.469827 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.469840 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.474328 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.474353 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.474367 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.474736 2607 log.go:181] (0xc0002a4160) Data frame received for 5\nI0907 08:49:43.474763 2607 log.go:181] (0xc000c75860) (5) Data frame handling\nI0907 08:49:43.474774 2607 log.go:181] (0xc000c75860) (5) Data frame sent\nI0907 08:49:43.474783 2607 log.go:181] (0xc0002a4160) Data frame received for 5\nI0907 08:49:43.474791 2607 log.go:181] (0xc000c75860) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.474809 2607 log.go:181] (0xc000c75860) (5) Data frame sent\nI0907 08:49:43.474825 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.474833 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.474841 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.482121 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.482163 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.482197 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.482662 2607 log.go:181] (0xc0002a4160) Data frame received for 5\nI0907 08:49:43.482691 2607 log.go:181] (0xc0002a4160) 
Data frame received for 3\nI0907 08:49:43.482708 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.482716 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.482730 2607 log.go:181] (0xc000c75860) (5) Data frame handling\nI0907 08:49:43.482737 2607 log.go:181] (0xc000c75860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.490854 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.490875 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.490891 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.491560 2607 log.go:181] (0xc0002a4160) Data frame received for 5\nI0907 08:49:43.491597 2607 log.go:181] (0xc000c75860) (5) Data frame handling\nI0907 08:49:43.491615 2607 log.go:181] (0xc000c75860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.491641 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.491673 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.491701 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.498970 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.498995 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.499020 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.499817 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.499839 2607 log.go:181] (0xc0002a4160) Data frame received for 5\nI0907 08:49:43.499870 2607 log.go:181] (0xc000c75860) (5) Data frame handling\nI0907 08:49:43.499891 2607 log.go:181] (0xc000c75860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.499909 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.499918 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.506283 2607 log.go:181] (0xc0002a4160) 
Data frame received for 3\nI0907 08:49:43.506306 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.506317 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.506882 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.506899 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.506912 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.506936 2607 log.go:181] (0xc0002a4160) Data frame received for 5\nI0907 08:49:43.506960 2607 log.go:181] (0xc000c75860) (5) Data frame handling\nI0907 08:49:43.506990 2607 log.go:181] (0xc000c75860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.511238 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.511254 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.511263 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.511896 2607 log.go:181] (0xc0002a4160) Data frame received for 5\nI0907 08:49:43.511939 2607 log.go:181] (0xc000c75860) (5) Data frame handling\nI0907 08:49:43.511971 2607 log.go:181] (0xc000c75860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.512081 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.512104 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.512117 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.518735 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.518760 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.518776 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.518997 2607 log.go:181] (0xc0002a4160) Data frame received for 5\nI0907 08:49:43.519015 2607 log.go:181] (0xc000c75860) (5) Data frame handling\nI0907 08:49:43.519032 2607 log.go:181] (0xc000c75860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 
2 http://10.102.41.132:80/\nI0907 08:49:43.519095 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.519122 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.519145 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.523697 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.523713 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.523728 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.524688 2607 log.go:181] (0xc0002a4160) Data frame received for 5\nI0907 08:49:43.524730 2607 log.go:181] (0xc000c75860) (5) Data frame handling\nI0907 08:49:43.524744 2607 log.go:181] (0xc000c75860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.524759 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.524768 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.524776 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.530117 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.530166 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.530202 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.530582 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.530610 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.530622 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.530632 2607 log.go:181] (0xc0002a4160) Data frame received for 5\nI0907 08:49:43.530639 2607 log.go:181] (0xc000c75860) (5) Data frame handling\nI0907 08:49:43.530648 2607 log.go:181] (0xc000c75860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.535440 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.535466 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.535508 2607 
log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.536214 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.536226 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.536232 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.536271 2607 log.go:181] (0xc0002a4160) Data frame received for 5\nI0907 08:49:43.536297 2607 log.go:181] (0xc000c75860) (5) Data frame handling\nI0907 08:49:43.536315 2607 log.go:181] (0xc000c75860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.541405 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.541421 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.541429 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.542192 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.542210 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.542220 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.542239 2607 log.go:181] (0xc0002a4160) Data frame received for 5\nI0907 08:49:43.542250 2607 log.go:181] (0xc000c75860) (5) Data frame handling\nI0907 08:49:43.542258 2607 log.go:181] (0xc000c75860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.545395 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.545426 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.545454 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.546020 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.546057 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.546081 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.546110 2607 log.go:181] (0xc0002a4160) Data frame received for 5\nI0907 08:49:43.546131 2607 log.go:181] (0xc000c75860) (5) Data frame handling\nI0907 
08:49:43.546158 2607 log.go:181] (0xc000c75860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.552677 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.552702 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.552719 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.553440 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.553473 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.553486 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.553503 2607 log.go:181] (0xc0002a4160) Data frame received for 5\nI0907 08:49:43.553513 2607 log.go:181] (0xc000c75860) (5) Data frame handling\nI0907 08:49:43.553523 2607 log.go:181] (0xc000c75860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.559177 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.559197 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.559218 2607 log.go:181] (0xc000c757c0) (3) Data frame sent\nI0907 08:49:43.559904 2607 log.go:181] (0xc0002a4160) Data frame received for 5\nI0907 08:49:43.559922 2607 log.go:181] (0xc000c75860) (5) Data frame handling\nI0907 08:49:43.560254 2607 log.go:181] (0xc0002a4160) Data frame received for 3\nI0907 08:49:43.560296 2607 log.go:181] (0xc000c757c0) (3) Data frame handling\nI0907 08:49:43.561875 2607 log.go:181] (0xc0002a4160) Data frame received for 1\nI0907 08:49:43.561894 2607 log.go:181] (0xc0009d8320) (1) Data frame handling\nI0907 08:49:43.561913 2607 log.go:181] (0xc0009d8320) (1) Data frame sent\nI0907 08:49:43.561932 2607 log.go:181] (0xc0002a4160) (0xc0009d8320) Stream removed, broadcasting: 1\nI0907 08:49:43.561967 2607 log.go:181] (0xc0002a4160) Go away received\nI0907 08:49:43.562347 2607 log.go:181] (0xc0002a4160) (0xc0009d8320) Stream removed, broadcasting: 1\nI0907 08:49:43.562365 
2607 log.go:181] (0xc0002a4160) (0xc000c757c0) Stream removed, broadcasting: 3\nI0907 08:49:43.562374 2607 log.go:181] (0xc0002a4160) (0xc000c75860) Stream removed, broadcasting: 5\n"
Sep 7 08:49:43.567: INFO: stdout: "\naffinity-clusterip-transition-wvfzq\naffinity-clusterip-transition-4n85w\naffinity-clusterip-transition-wvfzq\naffinity-clusterip-transition-wvfzq\naffinity-clusterip-transition-wvfzq\naffinity-clusterip-transition-wvfzq\naffinity-clusterip-transition-4n85w\naffinity-clusterip-transition-wvfzq\naffinity-clusterip-transition-wvfzq\naffinity-clusterip-transition-wvfzq\naffinity-clusterip-transition-4j4ht\naffinity-clusterip-transition-4j4ht\naffinity-clusterip-transition-4n85w\naffinity-clusterip-transition-4n85w\naffinity-clusterip-transition-wvfzq\naffinity-clusterip-transition-4n85w"
Sep 7 08:49:43.567: INFO: Received response from host: affinity-clusterip-transition-wvfzq
Sep 7 08:49:43.567: INFO: Received response from host: affinity-clusterip-transition-4n85w
Sep 7 08:49:43.567: INFO: Received response from host: affinity-clusterip-transition-wvfzq
Sep 7 08:49:43.567: INFO: Received response from host: affinity-clusterip-transition-wvfzq
Sep 7 08:49:43.567: INFO: Received response from host: affinity-clusterip-transition-wvfzq
Sep 7 08:49:43.567: INFO: Received response from host: affinity-clusterip-transition-wvfzq
Sep 7 08:49:43.567: INFO: Received response from host: affinity-clusterip-transition-4n85w
Sep 7 08:49:43.567: INFO: Received response from host: affinity-clusterip-transition-wvfzq
Sep 7 08:49:43.567: INFO: Received response from host: affinity-clusterip-transition-wvfzq
Sep 7 08:49:43.567: INFO: Received response from host: affinity-clusterip-transition-wvfzq
Sep 7 08:49:43.567: INFO: Received response from host: affinity-clusterip-transition-4j4ht
Sep 7 08:49:43.567: INFO: Received response from host: affinity-clusterip-transition-4j4ht
Sep 7 08:49:43.567: INFO: Received response from host: affinity-clusterip-transition-4n85w
Sep 7 08:49:43.567: INFO: Received response from host: affinity-clusterip-transition-4n85w
Sep 7 08:49:43.567: INFO: Received response from host: affinity-clusterip-transition-wvfzq
Sep 7 08:49:43.567: INFO: Received response from host: affinity-clusterip-transition-4n85w
Sep 7 08:49:43.578: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-2223 execpod-affinity6thqx -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.102.41.132:80/ ; done'
Sep 7 08:49:44.023: INFO: stderr: "I0907 08:49:43.849100 2626 log.go:181] (0xc000c514a0) (0xc000c48a00) Create stream\nI0907 08:49:43.849169 2626 log.go:181] (0xc000c514a0) (0xc000c48a00) Stream added, broadcasting: 1\nI0907 08:49:43.859032 2626 log.go:181] (0xc000c514a0) Reply frame received for 1\nI0907 08:49:43.859083 2626 log.go:181] (0xc000c514a0) (0xc000c48000) Create stream\nI0907 08:49:43.859095 2626 log.go:181] (0xc000c514a0) (0xc000c48000) Stream added, broadcasting: 3\nI0907 08:49:43.860286 2626 log.go:181] (0xc000c514a0) Reply frame received for 3\nI0907 08:49:43.860348 2626 log.go:181] (0xc000c514a0) (0xc0009a0b40) Create stream\nI0907 08:49:43.860372 2626 log.go:181] (0xc000c514a0) (0xc0009a0b40) Stream added, broadcasting: 5\nI0907 08:49:43.861348 2626 log.go:181] (0xc000c514a0) Reply frame received for 5\nI0907 08:49:43.916605 2626 log.go:181] (0xc000c514a0) Data frame received for 5\nI0907 08:49:43.916653 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.916686 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.916703 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.916733 2626 log.go:181] (0xc0009a0b40) (5) Data frame handling\nI0907 08:49:43.916748 2626 log.go:181] (0xc0009a0b40) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.919432 2626 log.go:181] (0xc000c514a0)
Data frame received for 3\nI0907 08:49:43.919447 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.919458 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.920264 2626 log.go:181] (0xc000c514a0) Data frame received for 5\nI0907 08:49:43.920318 2626 log.go:181] (0xc0009a0b40) (5) Data frame handling\nI0907 08:49:43.920345 2626 log.go:181] (0xc0009a0b40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.920380 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.920403 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.920434 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.926788 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.926814 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.926834 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.927604 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.927621 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.927631 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.927790 2626 log.go:181] (0xc000c514a0) Data frame received for 5\nI0907 08:49:43.927816 2626 log.go:181] (0xc0009a0b40) (5) Data frame handling\nI0907 08:49:43.927835 2626 log.go:181] (0xc0009a0b40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.934257 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.934285 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.934330 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.934782 2626 log.go:181] (0xc000c514a0) Data frame received for 5\nI0907 08:49:43.934809 2626 log.go:181] (0xc0009a0b40) (5) Data frame handling\nI0907 08:49:43.934820 2626 log.go:181] (0xc0009a0b40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 
2 http://10.102.41.132:80/\nI0907 08:49:43.934832 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.934838 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.934845 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.941743 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.941765 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.941776 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.942038 2626 log.go:181] (0xc000c514a0) Data frame received for 5\nI0907 08:49:43.942078 2626 log.go:181] (0xc0009a0b40) (5) Data frame handling\nI0907 08:49:43.942102 2626 log.go:181] (0xc0009a0b40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.942145 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.942168 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.942177 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.947203 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.947233 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.947268 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.947797 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.947820 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.947832 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.947845 2626 log.go:181] (0xc000c514a0) Data frame received for 5\nI0907 08:49:43.947854 2626 log.go:181] (0xc0009a0b40) (5) Data frame handling\nI0907 08:49:43.947862 2626 log.go:181] (0xc0009a0b40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.951602 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.951626 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.951659 2626 
log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.951954 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.951985 2626 log.go:181] (0xc000c514a0) Data frame received for 5\nI0907 08:49:43.952084 2626 log.go:181] (0xc0009a0b40) (5) Data frame handling\nI0907 08:49:43.952097 2626 log.go:181] (0xc0009a0b40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.952117 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.952167 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.957438 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.957471 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.957499 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.958205 2626 log.go:181] (0xc000c514a0) Data frame received for 5\nI0907 08:49:43.958229 2626 log.go:181] (0xc0009a0b40) (5) Data frame handling\nI0907 08:49:43.958239 2626 log.go:181] (0xc0009a0b40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.958254 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.958288 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.958319 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.963208 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.963230 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.963248 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.963639 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.963653 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.963660 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.963688 2626 log.go:181] (0xc000c514a0) Data frame received for 5\nI0907 08:49:43.963708 2626 log.go:181] (0xc0009a0b40) (5) Data frame handling\nI0907 
08:49:43.963725 2626 log.go:181] (0xc0009a0b40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.969005 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.969041 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.969069 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.969368 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.969384 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.969395 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.969410 2626 log.go:181] (0xc000c514a0) Data frame received for 5\nI0907 08:49:43.969421 2626 log.go:181] (0xc0009a0b40) (5) Data frame handling\nI0907 08:49:43.969431 2626 log.go:181] (0xc0009a0b40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.976891 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.976907 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.976915 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.977501 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.977534 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.977553 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.977576 2626 log.go:181] (0xc000c514a0) Data frame received for 5\nI0907 08:49:43.977590 2626 log.go:181] (0xc0009a0b40) (5) Data frame handling\nI0907 08:49:43.977601 2626 log.go:181] (0xc0009a0b40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.981295 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.981319 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.981332 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.981811 2626 log.go:181] (0xc000c514a0) Data frame received for 
3\nI0907 08:49:43.981844 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.981858 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.981886 2626 log.go:181] (0xc000c514a0) Data frame received for 5\nI0907 08:49:43.981909 2626 log.go:181] (0xc0009a0b40) (5) Data frame handling\nI0907 08:49:43.981937 2626 log.go:181] (0xc0009a0b40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.988575 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.988593 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.988603 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.989253 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.989287 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.989300 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.989316 2626 log.go:181] (0xc000c514a0) Data frame received for 5\nI0907 08:49:43.989327 2626 log.go:181] (0xc0009a0b40) (5) Data frame handling\nI0907 08:49:43.989338 2626 log.go:181] (0xc0009a0b40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:43.992835 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.992853 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.992864 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.993635 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.993661 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.993679 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.993706 2626 log.go:181] (0xc000c514a0) Data frame received for 5\nI0907 08:49:43.993720 2626 log.go:181] (0xc0009a0b40) (5) Data frame handling\nI0907 08:49:43.993811 2626 log.go:181] (0xc0009a0b40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.102.41.132:80/\nI0907 08:49:43.999132 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.999170 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.999212 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.999797 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:43.999822 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:43.999835 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:43.999848 2626 log.go:181] (0xc000c514a0) Data frame received for 5\nI0907 08:49:43.999860 2626 log.go:181] (0xc0009a0b40) (5) Data frame handling\nI0907 08:49:43.999872 2626 log.go:181] (0xc0009a0b40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:44.007209 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:44.007236 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:44.007255 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:44.008176 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:44.008192 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:44.008200 2626 log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:44.008234 2626 log.go:181] (0xc000c514a0) Data frame received for 5\nI0907 08:49:44.008272 2626 log.go:181] (0xc0009a0b40) (5) Data frame handling\nI0907 08:49:44.008300 2626 log.go:181] (0xc0009a0b40) (5) Data frame sent\nI0907 08:49:44.008314 2626 log.go:181] (0xc000c514a0) Data frame received for 5\nI0907 08:49:44.008324 2626 log.go:181] (0xc0009a0b40) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.41.132:80/\nI0907 08:49:44.008352 2626 log.go:181] (0xc0009a0b40) (5) Data frame sent\nI0907 08:49:44.014888 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:44.014913 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:44.014932 2626 
log.go:181] (0xc000c48000) (3) Data frame sent\nI0907 08:49:44.015527 2626 log.go:181] (0xc000c514a0) Data frame received for 5\nI0907 08:49:44.015542 2626 log.go:181] (0xc0009a0b40) (5) Data frame handling\nI0907 08:49:44.015568 2626 log.go:181] (0xc000c514a0) Data frame received for 3\nI0907 08:49:44.015589 2626 log.go:181] (0xc000c48000) (3) Data frame handling\nI0907 08:49:44.017868 2626 log.go:181] (0xc000c514a0) Data frame received for 1\nI0907 08:49:44.017883 2626 log.go:181] (0xc000c48a00) (1) Data frame handling\nI0907 08:49:44.017897 2626 log.go:181] (0xc000c48a00) (1) Data frame sent\nI0907 08:49:44.017913 2626 log.go:181] (0xc000c514a0) (0xc000c48a00) Stream removed, broadcasting: 1\nI0907 08:49:44.018061 2626 log.go:181] (0xc000c514a0) Go away received\nI0907 08:49:44.018393 2626 log.go:181] (0xc000c514a0) (0xc000c48a00) Stream removed, broadcasting: 1\nI0907 08:49:44.018414 2626 log.go:181] (0xc000c514a0) (0xc000c48000) Stream removed, broadcasting: 3\nI0907 08:49:44.018424 2626 log.go:181] (0xc000c514a0) (0xc0009a0b40) Stream removed, broadcasting: 5\n" Sep 7 08:49:44.024: INFO: stdout: "\naffinity-clusterip-transition-4j4ht\naffinity-clusterip-transition-4j4ht\naffinity-clusterip-transition-4j4ht\naffinity-clusterip-transition-4j4ht\naffinity-clusterip-transition-4j4ht\naffinity-clusterip-transition-4j4ht\naffinity-clusterip-transition-4j4ht\naffinity-clusterip-transition-4j4ht\naffinity-clusterip-transition-4j4ht\naffinity-clusterip-transition-4j4ht\naffinity-clusterip-transition-4j4ht\naffinity-clusterip-transition-4j4ht\naffinity-clusterip-transition-4j4ht\naffinity-clusterip-transition-4j4ht\naffinity-clusterip-transition-4j4ht\naffinity-clusterip-transition-4j4ht" Sep 7 08:49:44.024: INFO: Received response from host: affinity-clusterip-transition-4j4ht Sep 7 08:49:44.024: INFO: Received response from host: affinity-clusterip-transition-4j4ht Sep 7 08:49:44.024: INFO: Received response from host: affinity-clusterip-transition-4j4ht Sep 7 
08:49:44.024: INFO: Received response from host: affinity-clusterip-transition-4j4ht Sep 7 08:49:44.024: INFO: Received response from host: affinity-clusterip-transition-4j4ht Sep 7 08:49:44.024: INFO: Received response from host: affinity-clusterip-transition-4j4ht Sep 7 08:49:44.024: INFO: Received response from host: affinity-clusterip-transition-4j4ht Sep 7 08:49:44.024: INFO: Received response from host: affinity-clusterip-transition-4j4ht Sep 7 08:49:44.024: INFO: Received response from host: affinity-clusterip-transition-4j4ht Sep 7 08:49:44.024: INFO: Received response from host: affinity-clusterip-transition-4j4ht Sep 7 08:49:44.024: INFO: Received response from host: affinity-clusterip-transition-4j4ht Sep 7 08:49:44.024: INFO: Received response from host: affinity-clusterip-transition-4j4ht Sep 7 08:49:44.024: INFO: Received response from host: affinity-clusterip-transition-4j4ht Sep 7 08:49:44.024: INFO: Received response from host: affinity-clusterip-transition-4j4ht Sep 7 08:49:44.024: INFO: Received response from host: affinity-clusterip-transition-4j4ht Sep 7 08:49:44.024: INFO: Received response from host: affinity-clusterip-transition-4j4ht Sep 7 08:49:44.024: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-2223, will wait for the garbage collector to delete the pods Sep 7 08:49:45.053: INFO: Deleting ReplicationController affinity-clusterip-transition took: 95.060124ms Sep 7 08:49:45.154: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.196081ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:50:01.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2223" for this suite. 
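The affinity test above passes because all 16 curl responses came back from the same backend pod (affinity-clusterip-transition-4j4ht). A minimal sketch of that verification step, assuming the response list format shown in the stdout above (this is an illustrative re-implementation, not the k8s e2e framework's actual Go code):

```shell
# Session-affinity check: every request through the ClusterIP service
# must have been served by a single backend pod.
# One pod hostname per line, mirroring the stdout captured above.
responses="affinity-clusterip-transition-4j4ht
affinity-clusterip-transition-4j4ht
affinity-clusterip-transition-4j4ht"

# Count distinct hostnames; affinity holds iff exactly one pod answered.
distinct=$(printf '%s\n' "$responses" | sort -u | wc -l)
if [ "$distinct" -eq 1 ]; then
  echo "affinity held: all requests hit one pod"
else
  echo "affinity broken: $distinct distinct pods served traffic"
fi
```

The real test then flips the Service's sessionAffinity setting and repeats the probe, expecting the distribution to change accordingly.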
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:30.454 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":195,"skipped":2976,"failed":0} SSSSSSSSS ------------------------------ [sig-instrumentation] Events API should delete a collection of events [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:50:02.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events STEP: get 
a list of Events with a label in the current namespace STEP: delete a list of events Sep 7 08:50:02.144: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:50:02.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-723" for this suite. •{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":303,"completed":196,"skipped":2985,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:50:02.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-5hsj2 in namespace proxy-70 I0907 08:50:02.370301 7 runners.go:190] Created replication controller with name: proxy-service-5hsj2, namespace: proxy-70, replica count: 1 I0907 08:50:03.420726 7 runners.go:190] proxy-service-5hsj2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:50:04.420984 7 runners.go:190] proxy-service-5hsj2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:50:05.421199 7 runners.go:190] proxy-service-5hsj2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:50:06.421480 7 runners.go:190] proxy-service-5hsj2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0907 08:50:07.421679 7 runners.go:190] proxy-service-5hsj2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0907 08:50:08.421931 7 runners.go:190] proxy-service-5hsj2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0907 08:50:09.422201 7 runners.go:190] proxy-service-5hsj2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0907 08:50:10.422421 7 runners.go:190] proxy-service-5hsj2 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 7 08:50:10.426: INFO: setup took 8.178405339s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Sep 7 08:50:10.457: INFO: (0) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:162/proxy/: bar (200; 30.90124ms) Sep 7 08:50:10.457: INFO: (0) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:162/proxy/: bar (200; 30.949832ms) Sep 7 08:50:10.457: INFO: (0) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:1080/proxy/: testte... 
(200; 33.196428ms) Sep 7 08:50:10.460: INFO: (0) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm/proxy/: test (200; 33.753369ms) Sep 7 08:50:10.460: INFO: (0) /api/v1/namespaces/proxy-70/services/proxy-service-5hsj2:portname2/proxy/: bar (200; 33.762165ms) Sep 7 08:50:10.460: INFO: (0) /api/v1/namespaces/proxy-70/services/proxy-service-5hsj2:portname1/proxy/: foo (200; 33.89577ms) Sep 7 08:50:10.462: INFO: (0) /api/v1/namespaces/proxy-70/services/http:proxy-service-5hsj2:portname1/proxy/: foo (200; 35.983385ms) Sep 7 08:50:10.462: INFO: (0) /api/v1/namespaces/proxy-70/services/http:proxy-service-5hsj2:portname2/proxy/: bar (200; 36.094682ms) Sep 7 08:50:10.463: INFO: (0) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:160/proxy/: foo (200; 37.087334ms) Sep 7 08:50:10.465: INFO: (0) /api/v1/namespaces/proxy-70/services/https:proxy-service-5hsj2:tlsportname1/proxy/: tls baz (200; 38.967274ms) Sep 7 08:50:10.465: INFO: (0) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:460/proxy/: tls baz (200; 39.143802ms) Sep 7 08:50:10.465: INFO: (0) /api/v1/namespaces/proxy-70/services/https:proxy-service-5hsj2:tlsportname2/proxy/: tls qux (200; 39.20353ms) Sep 7 08:50:10.465: INFO: (0) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:462/proxy/: tls qux (200; 39.286042ms) Sep 7 08:50:10.466: INFO: (0) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:443/proxy/: testtest (200; 4.200636ms) Sep 7 08:50:10.470: INFO: (1) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:1080/proxy/: te... 
(200; 4.483614ms) Sep 7 08:50:10.470: INFO: (1) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:160/proxy/: foo (200; 4.518195ms) Sep 7 08:50:10.470: INFO: (1) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:162/proxy/: bar (200; 4.41679ms) Sep 7 08:50:10.471: INFO: (1) /api/v1/namespaces/proxy-70/services/http:proxy-service-5hsj2:portname2/proxy/: bar (200; 4.696274ms) Sep 7 08:50:10.471: INFO: (1) /api/v1/namespaces/proxy-70/services/http:proxy-service-5hsj2:portname1/proxy/: foo (200; 4.699332ms) Sep 7 08:50:10.471: INFO: (1) /api/v1/namespaces/proxy-70/services/https:proxy-service-5hsj2:tlsportname1/proxy/: tls baz (200; 4.711956ms) Sep 7 08:50:10.471: INFO: (1) /api/v1/namespaces/proxy-70/services/proxy-service-5hsj2:portname2/proxy/: bar (200; 4.705824ms) Sep 7 08:50:10.475: INFO: (2) /api/v1/namespaces/proxy-70/services/proxy-service-5hsj2:portname1/proxy/: foo (200; 3.83594ms) Sep 7 08:50:10.475: INFO: (2) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:162/proxy/: bar (200; 4.055615ms) Sep 7 08:50:10.475: INFO: (2) /api/v1/namespaces/proxy-70/services/proxy-service-5hsj2:portname2/proxy/: bar (200; 4.041876ms) Sep 7 08:50:10.475: INFO: (2) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm/proxy/: test (200; 4.108163ms) Sep 7 08:50:10.475: INFO: (2) /api/v1/namespaces/proxy-70/services/http:proxy-service-5hsj2:portname1/proxy/: foo (200; 4.243941ms) Sep 7 08:50:10.475: INFO: (2) /api/v1/namespaces/proxy-70/services/http:proxy-service-5hsj2:portname2/proxy/: bar (200; 4.470901ms) Sep 7 08:50:10.475: INFO: (2) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:1080/proxy/: te... (200; 4.52795ms) Sep 7 08:50:10.475: INFO: (2) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:443/proxy/: testte... 
(200; 5.480505ms) Sep 7 08:50:10.481: INFO: (3) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:160/proxy/: foo (200; 5.498458ms) Sep 7 08:50:10.482: INFO: (3) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:162/proxy/: bar (200; 6.305677ms) Sep 7 08:50:10.482: INFO: (3) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:443/proxy/: testtest (200; 6.302877ms) Sep 7 08:50:10.482: INFO: (3) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:462/proxy/: tls qux (200; 6.394793ms) Sep 7 08:50:10.482: INFO: (3) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:160/proxy/: foo (200; 6.370565ms) Sep 7 08:50:10.482: INFO: (3) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:460/proxy/: tls baz (200; 6.389876ms) Sep 7 08:50:10.483: INFO: (3) /api/v1/namespaces/proxy-70/services/http:proxy-service-5hsj2:portname2/proxy/: bar (200; 6.903926ms) Sep 7 08:50:10.483: INFO: (3) /api/v1/namespaces/proxy-70/services/proxy-service-5hsj2:portname2/proxy/: bar (200; 6.908707ms) Sep 7 08:50:10.483: INFO: (3) /api/v1/namespaces/proxy-70/services/http:proxy-service-5hsj2:portname1/proxy/: foo (200; 7.316708ms) Sep 7 08:50:10.483: INFO: (3) /api/v1/namespaces/proxy-70/services/proxy-service-5hsj2:portname1/proxy/: foo (200; 7.380813ms) Sep 7 08:50:10.483: INFO: (3) /api/v1/namespaces/proxy-70/services/https:proxy-service-5hsj2:tlsportname2/proxy/: tls qux (200; 7.496308ms) Sep 7 08:50:10.483: INFO: (3) /api/v1/namespaces/proxy-70/services/https:proxy-service-5hsj2:tlsportname1/proxy/: tls baz (200; 7.648115ms) Sep 7 08:50:10.486: INFO: (4) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm/proxy/: test (200; 2.832692ms) Sep 7 08:50:10.486: INFO: (4) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:460/proxy/: tls baz (200; 2.909676ms) Sep 7 08:50:10.486: INFO: (4) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:1080/proxy/: te... 
(200; 2.791104ms) Sep 7 08:50:10.486: INFO: (4) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:162/proxy/: bar (200; 2.950773ms) Sep 7 08:50:10.487: INFO: (4) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:160/proxy/: foo (200; 3.203819ms) Sep 7 08:50:10.487: INFO: (4) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:162/proxy/: bar (200; 3.580643ms) Sep 7 08:50:10.487: INFO: (4) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:160/proxy/: foo (200; 3.828973ms) Sep 7 08:50:10.487: INFO: (4) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:1080/proxy/: testtestte... (200; 4.839308ms) Sep 7 08:50:10.494: INFO: (5) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:160/proxy/: foo (200; 5.085032ms) Sep 7 08:50:10.494: INFO: (5) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:160/proxy/: foo (200; 5.115789ms) Sep 7 08:50:10.494: INFO: (5) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:162/proxy/: bar (200; 5.172029ms) Sep 7 08:50:10.494: INFO: (5) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:462/proxy/: tls qux (200; 5.182837ms) Sep 7 08:50:10.494: INFO: (5) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm/proxy/: test (200; 5.238132ms) Sep 7 08:50:10.495: INFO: (5) /api/v1/namespaces/proxy-70/services/proxy-service-5hsj2:portname2/proxy/: bar (200; 6.214771ms) Sep 7 08:50:10.496: INFO: (5) /api/v1/namespaces/proxy-70/services/http:proxy-service-5hsj2:portname2/proxy/: bar (200; 6.511425ms) Sep 7 08:50:10.496: INFO: (5) /api/v1/namespaces/proxy-70/services/proxy-service-5hsj2:portname1/proxy/: foo (200; 6.46177ms) Sep 7 08:50:10.496: INFO: (5) /api/v1/namespaces/proxy-70/services/https:proxy-service-5hsj2:tlsportname2/proxy/: tls qux (200; 6.524789ms) Sep 7 08:50:10.496: INFO: (5) /api/v1/namespaces/proxy-70/services/http:proxy-service-5hsj2:portname1/proxy/: foo (200; 6.529418ms) Sep 7 08:50:10.496: INFO: (5) 
/api/v1/namespaces/proxy-70/services/https:proxy-service-5hsj2:tlsportname1/proxy/: tls baz (200; 6.512293ms) Sep 7 08:50:10.498: INFO: (6) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:1080/proxy/: testte... (200; 4.487682ms) Sep 7 08:50:10.500: INFO: (6) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm/proxy/: test (200; 4.630904ms) Sep 7 08:50:10.500: INFO: (6) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:162/proxy/: bar (200; 4.553026ms) Sep 7 08:50:10.500: INFO: (6) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:460/proxy/: tls baz (200; 4.577408ms) Sep 7 08:50:10.504: INFO: (7) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:160/proxy/: foo (200; 3.490516ms) Sep 7 08:50:10.504: INFO: (7) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:160/proxy/: foo (200; 3.584209ms) Sep 7 08:50:10.504: INFO: (7) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:162/proxy/: bar (200; 3.729757ms) Sep 7 08:50:10.504: INFO: (7) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:1080/proxy/: te... (200; 3.686787ms) Sep 7 08:50:10.505: INFO: (7) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:460/proxy/: tls baz (200; 4.007745ms) Sep 7 08:50:10.505: INFO: (7) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:1080/proxy/: testtest (200; 4.131077ms) Sep 7 08:50:10.505: INFO: (7) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:443/proxy/: testtest (200; 5.207799ms) Sep 7 08:50:10.511: INFO: (8) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:1080/proxy/: te... 
(200; 5.215031ms) Sep 7 08:50:10.511: INFO: (8) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:462/proxy/: tls qux (200; 5.260131ms) Sep 7 08:50:10.511: INFO: (8) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:443/proxy/: testtest (200; 4.707837ms) Sep 7 08:50:10.517: INFO: (9) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:1080/proxy/: te... (200; 5.451935ms) Sep 7 08:50:10.517: INFO: (9) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:460/proxy/: tls baz (200; 5.756733ms) Sep 7 08:50:10.517: INFO: (9) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:462/proxy/: tls qux (200; 5.521585ms) Sep 7 08:50:10.517: INFO: (9) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:162/proxy/: bar (200; 6.028299ms) Sep 7 08:50:10.517: INFO: (9) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:162/proxy/: bar (200; 5.943891ms) Sep 7 08:50:10.517: INFO: (9) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:160/proxy/: foo (200; 6.230942ms) Sep 7 08:50:10.517: INFO: (9) /api/v1/namespaces/proxy-70/services/https:proxy-service-5hsj2:tlsportname1/proxy/: tls baz (200; 5.492801ms) Sep 7 08:50:10.520: INFO: (10) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm/proxy/: test (200; 2.73126ms) Sep 7 08:50:10.520: INFO: (10) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:462/proxy/: tls qux (200; 2.850933ms) Sep 7 08:50:10.521: INFO: (10) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:460/proxy/: tls baz (200; 3.142055ms) Sep 7 08:50:10.521: INFO: (10) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:443/proxy/: testte... 
(200; 4.311716ms) Sep 7 08:50:10.522: INFO: (10) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:162/proxy/: bar (200; 4.379563ms) Sep 7 08:50:10.522: INFO: (10) /api/v1/namespaces/proxy-70/services/http:proxy-service-5hsj2:portname2/proxy/: bar (200; 4.635243ms) Sep 7 08:50:10.522: INFO: (10) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:162/proxy/: bar (200; 4.622995ms) Sep 7 08:50:10.524: INFO: (11) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:162/proxy/: bar (200; 2.191896ms) Sep 7 08:50:10.525: INFO: (11) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:162/proxy/: bar (200; 2.562663ms) Sep 7 08:50:10.525: INFO: (11) /api/v1/namespaces/proxy-70/services/https:proxy-service-5hsj2:tlsportname1/proxy/: tls baz (200; 3.020955ms) Sep 7 08:50:10.526: INFO: (11) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:443/proxy/: test (200; 4.323646ms) Sep 7 08:50:10.527: INFO: (11) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:160/proxy/: foo (200; 4.312887ms) Sep 7 08:50:10.527: INFO: (11) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:1080/proxy/: testte... 
(200; 4.29657ms) Sep 7 08:50:10.527: INFO: (11) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:462/proxy/: tls qux (200; 4.42588ms) Sep 7 08:50:10.528: INFO: (11) /api/v1/namespaces/proxy-70/services/proxy-service-5hsj2:portname1/proxy/: foo (200; 5.267438ms) Sep 7 08:50:10.528: INFO: (11) /api/v1/namespaces/proxy-70/services/http:proxy-service-5hsj2:portname1/proxy/: foo (200; 5.398473ms) Sep 7 08:50:10.528: INFO: (11) /api/v1/namespaces/proxy-70/services/http:proxy-service-5hsj2:portname2/proxy/: bar (200; 5.575067ms) Sep 7 08:50:10.528: INFO: (11) /api/v1/namespaces/proxy-70/services/proxy-service-5hsj2:portname2/proxy/: bar (200; 5.527679ms) Sep 7 08:50:10.528: INFO: (11) /api/v1/namespaces/proxy-70/services/https:proxy-service-5hsj2:tlsportname2/proxy/: tls qux (200; 5.660015ms) Sep 7 08:50:10.530: INFO: (12) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:160/proxy/: foo (200; 2.233315ms) Sep 7 08:50:10.531: INFO: (12) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:443/proxy/: te... 
(200; 3.064101ms) Sep 7 08:50:10.531: INFO: (12) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:162/proxy/: bar (200; 3.020616ms) Sep 7 08:50:10.532: INFO: (12) /api/v1/namespaces/proxy-70/services/https:proxy-service-5hsj2:tlsportname1/proxy/: tls baz (200; 3.827865ms) Sep 7 08:50:10.532: INFO: (12) /api/v1/namespaces/proxy-70/services/https:proxy-service-5hsj2:tlsportname2/proxy/: tls qux (200; 3.855197ms) Sep 7 08:50:10.532: INFO: (12) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:162/proxy/: bar (200; 3.930405ms) Sep 7 08:50:10.532: INFO: (12) /api/v1/namespaces/proxy-70/services/proxy-service-5hsj2:portname1/proxy/: foo (200; 4.008141ms) Sep 7 08:50:10.532: INFO: (12) /api/v1/namespaces/proxy-70/services/proxy-service-5hsj2:portname2/proxy/: bar (200; 4.166796ms) Sep 7 08:50:10.533: INFO: (12) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm/proxy/: test (200; 4.522301ms) Sep 7 08:50:10.533: INFO: (12) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:1080/proxy/: testtest (200; 2.651663ms) Sep 7 08:50:10.536: INFO: (13) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:1080/proxy/: te... 
(200; 2.896339ms) Sep 7 08:50:10.536: INFO: (13) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:443/proxy/: testtest (200; 3.95635ms) Sep 7 08:50:10.541: INFO: (14) /api/v1/namespaces/proxy-70/services/https:proxy-service-5hsj2:tlsportname2/proxy/: tls qux (200; 3.932753ms) Sep 7 08:50:10.541: INFO: (14) /api/v1/namespaces/proxy-70/services/http:proxy-service-5hsj2:portname2/proxy/: bar (200; 4.033653ms) Sep 7 08:50:10.541: INFO: (14) /api/v1/namespaces/proxy-70/services/proxy-service-5hsj2:portname1/proxy/: foo (200; 3.954558ms) Sep 7 08:50:10.541: INFO: (14) /api/v1/namespaces/proxy-70/services/http:proxy-service-5hsj2:portname1/proxy/: foo (200; 4.135742ms) Sep 7 08:50:10.541: INFO: (14) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:162/proxy/: bar (200; 4.083128ms) Sep 7 08:50:10.541: INFO: (14) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:443/proxy/: te... (200; 4.251425ms) Sep 7 08:50:10.541: INFO: (14) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:460/proxy/: tls baz (200; 4.317679ms) Sep 7 08:50:10.541: INFO: (14) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:162/proxy/: bar (200; 4.29582ms) Sep 7 08:50:10.541: INFO: (14) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:1080/proxy/: testtest (200; 2.231495ms) Sep 7 08:50:10.544: INFO: (15) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:160/proxy/: foo (200; 2.570398ms) Sep 7 08:50:10.544: INFO: (15) /api/v1/namespaces/proxy-70/services/http:proxy-service-5hsj2:portname1/proxy/: foo (200; 2.712057ms) Sep 7 08:50:10.544: INFO: (15) /api/v1/namespaces/proxy-70/services/http:proxy-service-5hsj2:portname2/proxy/: bar (200; 2.817339ms) Sep 7 08:50:10.544: INFO: (15) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:460/proxy/: tls baz (200; 2.609675ms) Sep 7 08:50:10.548: INFO: (15) /api/v1/namespaces/proxy-70/services/proxy-service-5hsj2:portname2/proxy/: bar (200; 6.653957ms) Sep 7 
08:50:10.549: INFO: (15) /api/v1/namespaces/proxy-70/services/https:proxy-service-5hsj2:tlsportname1/proxy/: tls baz (200; 6.688715ms) Sep 7 08:50:10.549: INFO: (15) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:1080/proxy/: te... (200; 6.489423ms) Sep 7 08:50:10.549: INFO: (15) /api/v1/namespaces/proxy-70/services/https:proxy-service-5hsj2:tlsportname2/proxy/: tls qux (200; 6.633054ms) Sep 7 08:50:10.549: INFO: (15) /api/v1/namespaces/proxy-70/services/proxy-service-5hsj2:portname1/proxy/: foo (200; 7.054079ms) Sep 7 08:50:10.549: INFO: (15) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:1080/proxy/: testtesttest (200; 4.549497ms) Sep 7 08:50:10.554: INFO: (16) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:462/proxy/: tls qux (200; 4.619447ms) Sep 7 08:50:10.554: INFO: (16) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:1080/proxy/: te... (200; 4.569442ms) Sep 7 08:50:10.554: INFO: (16) /api/v1/namespaces/proxy-70/services/https:proxy-service-5hsj2:tlsportname2/proxy/: tls qux (200; 4.603681ms) Sep 7 08:50:10.554: INFO: (16) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:460/proxy/: tls baz (200; 4.863357ms) Sep 7 08:50:10.557: INFO: (17) /api/v1/namespaces/proxy-70/services/proxy-service-5hsj2:portname2/proxy/: bar (200; 2.840086ms) Sep 7 08:50:10.557: INFO: (17) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:443/proxy/: testte... 
(200; 3.806743ms) Sep 7 08:50:10.558: INFO: (17) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:160/proxy/: foo (200; 3.79237ms) Sep 7 08:50:10.558: INFO: (17) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:162/proxy/: bar (200; 3.931117ms) Sep 7 08:50:10.558: INFO: (17) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:460/proxy/: tls baz (200; 3.881014ms) Sep 7 08:50:10.558: INFO: (17) /api/v1/namespaces/proxy-70/services/proxy-service-5hsj2:portname1/proxy/: foo (200; 4.002534ms) Sep 7 08:50:10.558: INFO: (17) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm/proxy/: test (200; 3.951565ms) Sep 7 08:50:10.558: INFO: (17) /api/v1/namespaces/proxy-70/services/https:proxy-service-5hsj2:tlsportname2/proxy/: tls qux (200; 3.963477ms) Sep 7 08:50:10.558: INFO: (17) /api/v1/namespaces/proxy-70/services/http:proxy-service-5hsj2:portname1/proxy/: foo (200; 4.009194ms) Sep 7 08:50:10.561: INFO: (18) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:162/proxy/: bar (200; 2.732554ms) Sep 7 08:50:10.561: INFO: (18) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:160/proxy/: foo (200; 2.920692ms) Sep 7 08:50:10.561: INFO: (18) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:1080/proxy/: te... 
(200; 2.774565ms) Sep 7 08:50:10.561: INFO: (18) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:462/proxy/: tls qux (200; 2.657056ms) Sep 7 08:50:10.561: INFO: (18) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:443/proxy/: testtest (200; 3.710792ms) Sep 7 08:50:10.562: INFO: (18) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:460/proxy/: tls baz (200; 4.148437ms) Sep 7 08:50:10.563: INFO: (18) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:160/proxy/: foo (200; 3.581754ms) Sep 7 08:50:10.563: INFO: (18) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:162/proxy/: bar (200; 4.529234ms) Sep 7 08:50:10.563: INFO: (18) /api/v1/namespaces/proxy-70/services/http:proxy-service-5hsj2:portname2/proxy/: bar (200; 4.407181ms) Sep 7 08:50:10.563: INFO: (18) /api/v1/namespaces/proxy-70/services/proxy-service-5hsj2:portname1/proxy/: foo (200; 3.924413ms) Sep 7 08:50:10.563: INFO: (18) /api/v1/namespaces/proxy-70/services/http:proxy-service-5hsj2:portname1/proxy/: foo (200; 3.936663ms) Sep 7 08:50:10.563: INFO: (18) /api/v1/namespaces/proxy-70/services/proxy-service-5hsj2:portname2/proxy/: bar (200; 4.252217ms) Sep 7 08:50:10.563: INFO: (18) /api/v1/namespaces/proxy-70/services/https:proxy-service-5hsj2:tlsportname1/proxy/: tls baz (200; 4.185506ms) Sep 7 08:50:10.563: INFO: (18) /api/v1/namespaces/proxy-70/services/https:proxy-service-5hsj2:tlsportname2/proxy/: tls qux (200; 4.23215ms) Sep 7 08:50:10.566: INFO: (19) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:160/proxy/: foo (200; 2.399479ms) Sep 7 08:50:10.566: INFO: (19) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm:1080/proxy/: testte... 
(200; 3.408318ms) Sep 7 08:50:10.567: INFO: (19) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:460/proxy/: tls baz (200; 4.233497ms) Sep 7 08:50:10.568: INFO: (19) /api/v1/namespaces/proxy-70/pods/proxy-service-5hsj2-25zcm/proxy/: test (200; 4.479683ms) Sep 7 08:50:10.568: INFO: (19) /api/v1/namespaces/proxy-70/services/proxy-service-5hsj2:portname1/proxy/: foo (200; 4.735327ms) Sep 7 08:50:10.568: INFO: (19) /api/v1/namespaces/proxy-70/pods/http:proxy-service-5hsj2-25zcm:162/proxy/: bar (200; 4.682566ms) Sep 7 08:50:10.568: INFO: (19) /api/v1/namespaces/proxy-70/services/http:proxy-service-5hsj2:portname2/proxy/: bar (200; 4.962259ms) Sep 7 08:50:10.568: INFO: (19) /api/v1/namespaces/proxy-70/services/https:proxy-service-5hsj2:tlsportname2/proxy/: tls qux (200; 4.930445ms) Sep 7 08:50:10.568: INFO: (19) /api/v1/namespaces/proxy-70/services/http:proxy-service-5hsj2:portname1/proxy/: foo (200; 4.993508ms) Sep 7 08:50:10.568: INFO: (19) /api/v1/namespaces/proxy-70/pods/https:proxy-service-5hsj2-25zcm:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2623.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2623.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2623.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2623.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for 
the results for each expected name from probers Sep 7 08:50:28.110: INFO: DNS probes using dns-test-4a9dc070-24d6-4b2e-acbc-d27599406da3 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2623.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2623.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2623.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2623.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 7 08:50:34.272: INFO: File wheezy_udp@dns-test-service-3.dns-2623.svc.cluster.local from pod dns-2623/dns-test-2dcb9d03-3952-4263-abec-abdbe353ab20 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 7 08:50:34.275: INFO: Lookups using dns-2623/dns-test-2dcb9d03-3952-4263-abec-abdbe353ab20 failed for: [wheezy_udp@dns-test-service-3.dns-2623.svc.cluster.local] Sep 7 08:50:39.280: INFO: File wheezy_udp@dns-test-service-3.dns-2623.svc.cluster.local from pod dns-2623/dns-test-2dcb9d03-3952-4263-abec-abdbe353ab20 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 7 08:50:39.285: INFO: Lookups using dns-2623/dns-test-2dcb9d03-3952-4263-abec-abdbe353ab20 failed for: [wheezy_udp@dns-test-service-3.dns-2623.svc.cluster.local] Sep 7 08:50:44.280: INFO: File wheezy_udp@dns-test-service-3.dns-2623.svc.cluster.local from pod dns-2623/dns-test-2dcb9d03-3952-4263-abec-abdbe353ab20 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Sep 7 08:50:44.284: INFO: Lookups using dns-2623/dns-test-2dcb9d03-3952-4263-abec-abdbe353ab20 failed for: [wheezy_udp@dns-test-service-3.dns-2623.svc.cluster.local] Sep 7 08:50:49.280: INFO: File wheezy_udp@dns-test-service-3.dns-2623.svc.cluster.local from pod dns-2623/dns-test-2dcb9d03-3952-4263-abec-abdbe353ab20 contains '' instead of 'bar.example.com.' Sep 7 08:50:49.283: INFO: Lookups using dns-2623/dns-test-2dcb9d03-3952-4263-abec-abdbe353ab20 failed for: [wheezy_udp@dns-test-service-3.dns-2623.svc.cluster.local] Sep 7 08:50:54.284: INFO: DNS probes using dns-test-2dcb9d03-3952-4263-abec-abdbe353ab20 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2623.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2623.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2623.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2623.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 7 08:51:00.785: INFO: DNS probes using dns-test-4fa46900-cd3c-4569-86c4-0e6d520e238d succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:51:00.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2623" for this suite. 
• [SLOW TEST:39.141 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":303,"completed":198,"skipped":3036,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:51:01.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 7 08:51:02.295: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 7 08:51:04.307: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065462, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065462, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065462, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065461, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 7 08:51:06.320: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065462, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065462, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065462, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065461, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 7 08:51:09.390: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] 
should mutate custom resource [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 08:51:09.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2388-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:51:10.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6969" for this suite. STEP: Destroying namespace "webhook-6969-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.529 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":303,"completed":199,"skipped":3041,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:51:10.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 08:51:10.727: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-220 I0907 08:51:10.747925 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-220, replica count: 1 I0907 08:51:11.798275 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:51:12.798491 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:51:13.798725 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:51:14.798925 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 7 08:51:14.939: INFO: Created: latency-svc-ndsjv Sep 7 08:51:14.955: INFO: Got endpoints: latency-svc-ndsjv [56.807221ms] Sep 7 08:51:14.986: INFO: Created: latency-svc-2r762 Sep 7 08:51:14.995: INFO: Got endpoints: latency-svc-2r762 
[39.698128ms] Sep 7 08:51:15.062: INFO: Created: latency-svc-vhdrd Sep 7 08:51:15.099: INFO: Got endpoints: latency-svc-vhdrd [143.413265ms] Sep 7 08:51:15.155: INFO: Created: latency-svc-ggs76 Sep 7 08:51:15.215: INFO: Got endpoints: latency-svc-ggs76 [259.032971ms] Sep 7 08:51:15.251: INFO: Created: latency-svc-wh2st Sep 7 08:51:15.260: INFO: Got endpoints: latency-svc-wh2st [303.718215ms] Sep 7 08:51:15.293: INFO: Created: latency-svc-78zxz Sep 7 08:51:15.361: INFO: Got endpoints: latency-svc-78zxz [404.654223ms] Sep 7 08:51:15.381: INFO: Created: latency-svc-58bmn Sep 7 08:51:15.398: INFO: Got endpoints: latency-svc-58bmn [441.96916ms] Sep 7 08:51:15.454: INFO: Created: latency-svc-hbkcj Sep 7 08:51:15.486: INFO: Got endpoints: latency-svc-hbkcj [530.217453ms] Sep 7 08:51:15.497: INFO: Created: latency-svc-fkgkt Sep 7 08:51:15.513: INFO: Got endpoints: latency-svc-fkgkt [556.916196ms] Sep 7 08:51:15.533: INFO: Created: latency-svc-zptpl Sep 7 08:51:15.561: INFO: Got endpoints: latency-svc-zptpl [605.1896ms] Sep 7 08:51:15.649: INFO: Created: latency-svc-xthxh Sep 7 08:51:15.663: INFO: Got endpoints: latency-svc-xthxh [706.811695ms] Sep 7 08:51:15.687: INFO: Created: latency-svc-dw576 Sep 7 08:51:15.705: INFO: Got endpoints: latency-svc-dw576 [749.460125ms] Sep 7 08:51:15.824: INFO: Created: latency-svc-8lbv4 Sep 7 08:51:15.827: INFO: Got endpoints: latency-svc-8lbv4 [870.926311ms] Sep 7 08:51:15.868: INFO: Created: latency-svc-cnpvs Sep 7 08:51:15.885: INFO: Got endpoints: latency-svc-cnpvs [929.176132ms] Sep 7 08:51:15.989: INFO: Created: latency-svc-fgftt Sep 7 08:51:16.000: INFO: Got endpoints: latency-svc-fgftt [1.044457617s] Sep 7 08:51:16.073: INFO: Created: latency-svc-gwqn7 Sep 7 08:51:16.157: INFO: Got endpoints: latency-svc-gwqn7 [1.200691857s] Sep 7 08:51:16.191: INFO: Created: latency-svc-thwjg Sep 7 08:51:16.216: INFO: Got endpoints: latency-svc-thwjg [1.220520905s] Sep 7 08:51:16.301: INFO: Created: latency-svc-5bpcn Sep 7 08:51:16.312: INFO: Got 
endpoints: latency-svc-5bpcn [1.212506897s] Sep 7 08:51:16.365: INFO: Created: latency-svc-fj85w Sep 7 08:51:16.379: INFO: Got endpoints: latency-svc-fj85w [1.16384537s] Sep 7 08:51:16.457: INFO: Created: latency-svc-sp4kv Sep 7 08:51:16.461: INFO: Got endpoints: latency-svc-sp4kv [1.201236729s] Sep 7 08:51:16.486: INFO: Created: latency-svc-zxrvt Sep 7 08:51:16.505: INFO: Got endpoints: latency-svc-zxrvt [1.144213265s] Sep 7 08:51:16.600: INFO: Created: latency-svc-9bqdb Sep 7 08:51:16.641: INFO: Got endpoints: latency-svc-9bqdb [1.243443075s] Sep 7 08:51:16.757: INFO: Created: latency-svc-l5zkt Sep 7 08:51:16.760: INFO: Got endpoints: latency-svc-l5zkt [1.273485907s] Sep 7 08:51:16.791: INFO: Created: latency-svc-f4sr6 Sep 7 08:51:16.811: INFO: Got endpoints: latency-svc-f4sr6 [1.298071484s] Sep 7 08:51:16.840: INFO: Created: latency-svc-lvg49 Sep 7 08:51:16.854: INFO: Got endpoints: latency-svc-lvg49 [1.292253423s] Sep 7 08:51:16.934: INFO: Created: latency-svc-sxvpz Sep 7 08:51:16.950: INFO: Got endpoints: latency-svc-sxvpz [1.286654023s] Sep 7 08:51:16.976: INFO: Created: latency-svc-bh72f Sep 7 08:51:16.991: INFO: Got endpoints: latency-svc-bh72f [1.285774609s] Sep 7 08:51:17.076: INFO: Created: latency-svc-65xxb Sep 7 08:51:17.088: INFO: Got endpoints: latency-svc-65xxb [1.260947635s] Sep 7 08:51:17.110: INFO: Created: latency-svc-xdnql Sep 7 08:51:17.134: INFO: Got endpoints: latency-svc-xdnql [1.249037163s] Sep 7 08:51:17.168: INFO: Created: latency-svc-6d5hs Sep 7 08:51:17.247: INFO: Got endpoints: latency-svc-6d5hs [1.246669686s] Sep 7 08:51:17.307: INFO: Created: latency-svc-pqfng Sep 7 08:51:17.322: INFO: Got endpoints: latency-svc-pqfng [1.165645971s] Sep 7 08:51:17.403: INFO: Created: latency-svc-dq2ms Sep 7 08:51:17.422: INFO: Got endpoints: latency-svc-dq2ms [1.205858051s] Sep 7 08:51:17.468: INFO: Created: latency-svc-psrp9 Sep 7 08:51:17.485: INFO: Got endpoints: latency-svc-psrp9 [1.17309299s] Sep 7 08:51:17.578: INFO: Created: latency-svc-mvpdq 
Sep 7 08:51:17.620: INFO: Got endpoints: latency-svc-mvpdq [1.241190432s] Sep 7 08:51:17.650: INFO: Created: latency-svc-4qgfm Sep 7 08:51:17.659: INFO: Got endpoints: latency-svc-4qgfm [1.197715013s] Sep 7 08:51:17.729: INFO: Created: latency-svc-9v6fk Sep 7 08:51:17.744: INFO: Got endpoints: latency-svc-9v6fk [1.23906381s] Sep 7 08:51:17.792: INFO: Created: latency-svc-rpzvm Sep 7 08:51:17.863: INFO: Got endpoints: latency-svc-rpzvm [1.221788174s] Sep 7 08:51:17.902: INFO: Created: latency-svc-t5dw9 Sep 7 08:51:17.919: INFO: Got endpoints: latency-svc-t5dw9 [1.159622651s] Sep 7 08:51:17.960: INFO: Created: latency-svc-gqwtd Sep 7 08:51:18.050: INFO: Got endpoints: latency-svc-gqwtd [1.239399178s] Sep 7 08:51:18.118: INFO: Created: latency-svc-psmff Sep 7 08:51:18.181: INFO: Got endpoints: latency-svc-psmff [1.327052224s] Sep 7 08:51:18.256: INFO: Created: latency-svc-9xbdg Sep 7 08:51:18.313: INFO: Got endpoints: latency-svc-9xbdg [1.362885039s] Sep 7 08:51:18.350: INFO: Created: latency-svc-686rd Sep 7 08:51:18.358: INFO: Got endpoints: latency-svc-686rd [1.366743247s] Sep 7 08:51:18.411: INFO: Created: latency-svc-zs9gr Sep 7 08:51:18.463: INFO: Got endpoints: latency-svc-zs9gr [1.374552706s] Sep 7 08:51:18.496: INFO: Created: latency-svc-zwqbd Sep 7 08:51:18.509: INFO: Got endpoints: latency-svc-zwqbd [1.374453904s] Sep 7 08:51:18.537: INFO: Created: latency-svc-nf48q Sep 7 08:51:18.618: INFO: Got endpoints: latency-svc-nf48q [1.37112928s] Sep 7 08:51:18.700: INFO: Created: latency-svc-gbftn Sep 7 08:51:18.713: INFO: Got endpoints: latency-svc-gbftn [1.390659155s] Sep 7 08:51:18.806: INFO: Created: latency-svc-8r4ph Sep 7 08:51:18.833: INFO: Got endpoints: latency-svc-8r4ph [1.410936616s] Sep 7 08:51:18.938: INFO: Created: latency-svc-r9ztn Sep 7 08:51:18.956: INFO: Got endpoints: latency-svc-r9ztn [1.471521982s] Sep 7 08:51:18.992: INFO: Created: latency-svc-s6hcc Sep 7 08:51:19.003: INFO: Got endpoints: latency-svc-s6hcc [1.382583762s] Sep 7 08:51:19.062: 
INFO: Created: latency-svc-tqrgv Sep 7 08:51:19.067: INFO: Got endpoints: latency-svc-tqrgv [1.407751658s] Sep 7 08:51:19.095: INFO: Created: latency-svc-jkqnb Sep 7 08:51:19.110: INFO: Got endpoints: latency-svc-jkqnb [1.366264643s] Sep 7 08:51:19.223: INFO: Created: latency-svc-6kh4t Sep 7 08:51:19.227: INFO: Got endpoints: latency-svc-6kh4t [1.364076013s] Sep 7 08:51:19.268: INFO: Created: latency-svc-8xzsj Sep 7 08:51:19.279: INFO: Got endpoints: latency-svc-8xzsj [1.359302733s] Sep 7 08:51:19.310: INFO: Created: latency-svc-cjwk7 Sep 7 08:51:19.361: INFO: Got endpoints: latency-svc-cjwk7 [1.310946937s] Sep 7 08:51:19.401: INFO: Created: latency-svc-9chgj Sep 7 08:51:19.417: INFO: Got endpoints: latency-svc-9chgj [1.236422367s] Sep 7 08:51:19.499: INFO: Created: latency-svc-dc4xp Sep 7 08:51:19.501: INFO: Got endpoints: latency-svc-dc4xp [1.188424429s] Sep 7 08:51:19.531: INFO: Created: latency-svc-ghrfk Sep 7 08:51:19.568: INFO: Got endpoints: latency-svc-ghrfk [1.209502816s] Sep 7 08:51:19.654: INFO: Created: latency-svc-8k649 Sep 7 08:51:19.657: INFO: Got endpoints: latency-svc-8k649 [1.19469351s] Sep 7 08:51:19.689: INFO: Created: latency-svc-g9lg4 Sep 7 08:51:19.706: INFO: Got endpoints: latency-svc-g9lg4 [1.197329611s] Sep 7 08:51:19.725: INFO: Created: latency-svc-2fc24 Sep 7 08:51:19.742: INFO: Got endpoints: latency-svc-2fc24 [1.123550196s] Sep 7 08:51:19.810: INFO: Created: latency-svc-4s77l Sep 7 08:51:19.813: INFO: Got endpoints: latency-svc-4s77l [1.099889781s] Sep 7 08:51:19.875: INFO: Created: latency-svc-bf8vb Sep 7 08:51:19.904: INFO: Got endpoints: latency-svc-bf8vb [1.071345794s] Sep 7 08:51:19.986: INFO: Created: latency-svc-qqwbx Sep 7 08:51:20.007: INFO: Got endpoints: latency-svc-qqwbx [1.050035375s] Sep 7 08:51:20.037: INFO: Created: latency-svc-2g6rv Sep 7 08:51:20.073: INFO: Got endpoints: latency-svc-2g6rv [1.070280039s] Sep 7 08:51:20.145: INFO: Created: latency-svc-zpkg4 Sep 7 08:51:20.179: INFO: Got endpoints: latency-svc-zpkg4 
[1.11269774s] Sep 7 08:51:20.240: INFO: Created: latency-svc-k4gfg Sep 7 08:51:20.288: INFO: Got endpoints: latency-svc-k4gfg [1.178122307s] Sep 7 08:51:20.305: INFO: Created: latency-svc-nc8z9 Sep 7 08:51:20.324: INFO: Got endpoints: latency-svc-nc8z9 [1.096517914s] Sep 7 08:51:20.343: INFO: Created: latency-svc-s8vrm Sep 7 08:51:20.361: INFO: Got endpoints: latency-svc-s8vrm [1.081978894s] Sep 7 08:51:20.385: INFO: Created: latency-svc-5t2fx Sep 7 08:51:20.445: INFO: Got endpoints: latency-svc-5t2fx [1.083094933s] Sep 7 08:51:20.447: INFO: Created: latency-svc-x6hcs Sep 7 08:51:20.457: INFO: Got endpoints: latency-svc-x6hcs [1.039342166s] Sep 7 08:51:20.479: INFO: Created: latency-svc-jshb7 Sep 7 08:51:20.493: INFO: Got endpoints: latency-svc-jshb7 [991.791815ms] Sep 7 08:51:20.515: INFO: Created: latency-svc-44hqb Sep 7 08:51:20.529: INFO: Got endpoints: latency-svc-44hqb [961.455998ms] Sep 7 08:51:20.642: INFO: Created: latency-svc-k66v7 Sep 7 08:51:20.655: INFO: Got endpoints: latency-svc-k66v7 [997.278612ms] Sep 7 08:51:20.708: INFO: Created: latency-svc-446fd Sep 7 08:51:20.722: INFO: Got endpoints: latency-svc-446fd [1.01575351s] Sep 7 08:51:20.828: INFO: Created: latency-svc-m2zv7 Sep 7 08:51:20.831: INFO: Got endpoints: latency-svc-m2zv7 [1.089322592s] Sep 7 08:51:20.865: INFO: Created: latency-svc-97rf9 Sep 7 08:51:20.878: INFO: Got endpoints: latency-svc-97rf9 [1.064478988s] Sep 7 08:51:20.901: INFO: Created: latency-svc-wfdcp Sep 7 08:51:20.914: INFO: Got endpoints: latency-svc-wfdcp [1.009248408s] Sep 7 08:51:20.971: INFO: Created: latency-svc-vsb8f Sep 7 08:51:20.980: INFO: Got endpoints: latency-svc-vsb8f [973.801839ms] Sep 7 08:51:21.012: INFO: Created: latency-svc-vfvbr Sep 7 08:51:21.035: INFO: Got endpoints: latency-svc-vfvbr [961.438978ms] Sep 7 08:51:21.055: INFO: Created: latency-svc-tdjkr Sep 7 08:51:21.121: INFO: Got endpoints: latency-svc-tdjkr [941.846545ms] Sep 7 08:51:21.145: INFO: Created: latency-svc-p5jsx Sep 7 08:51:21.161: INFO: 
Got endpoints: latency-svc-p5jsx [872.398986ms] Sep 7 08:51:21.183: INFO: Created: latency-svc-fpgtf Sep 7 08:51:21.200: INFO: Got endpoints: latency-svc-fpgtf [875.544545ms] Sep 7 08:51:21.253: INFO: Created: latency-svc-9t2t5 Sep 7 08:51:21.289: INFO: Got endpoints: latency-svc-9t2t5 [928.251537ms] Sep 7 08:51:21.331: INFO: Created: latency-svc-lfr52 Sep 7 08:51:21.348: INFO: Got endpoints: latency-svc-lfr52 [903.109658ms] Sep 7 08:51:21.427: INFO: Created: latency-svc-bfvgr Sep 7 08:51:21.434: INFO: Got endpoints: latency-svc-bfvgr [977.041719ms] Sep 7 08:51:21.471: INFO: Created: latency-svc-fxdcl Sep 7 08:51:21.480: INFO: Got endpoints: latency-svc-fxdcl [987.306371ms] Sep 7 08:51:21.512: INFO: Created: latency-svc-fr2p2 Sep 7 08:51:21.630: INFO: Got endpoints: latency-svc-fr2p2 [1.101244567s] Sep 7 08:51:21.633: INFO: Created: latency-svc-6r9mg Sep 7 08:51:21.654: INFO: Got endpoints: latency-svc-6r9mg [999.370089ms] Sep 7 08:51:21.680: INFO: Created: latency-svc-wffbl Sep 7 08:51:21.709: INFO: Got endpoints: latency-svc-wffbl [986.816038ms] Sep 7 08:51:21.792: INFO: Created: latency-svc-w2jqk Sep 7 08:51:21.796: INFO: Got endpoints: latency-svc-w2jqk [964.998162ms] Sep 7 08:51:21.878: INFO: Created: latency-svc-llzrp Sep 7 08:51:21.959: INFO: Got endpoints: latency-svc-llzrp [1.081552445s] Sep 7 08:51:21.973: INFO: Created: latency-svc-sbr6m Sep 7 08:51:21.981: INFO: Got endpoints: latency-svc-sbr6m [1.066833681s] Sep 7 08:51:22.057: INFO: Created: latency-svc-q9dvp Sep 7 08:51:22.145: INFO: Got endpoints: latency-svc-q9dvp [1.164601459s] Sep 7 08:51:22.148: INFO: Created: latency-svc-ml7t6 Sep 7 08:51:22.173: INFO: Got endpoints: latency-svc-ml7t6 [1.138475312s] Sep 7 08:51:22.221: INFO: Created: latency-svc-n2nz4 Sep 7 08:51:22.354: INFO: Got endpoints: latency-svc-n2nz4 [1.232955081s] Sep 7 08:51:22.377: INFO: Created: latency-svc-hcb8w Sep 7 08:51:22.408: INFO: Got endpoints: latency-svc-hcb8w [1.246923854s] Sep 7 08:51:22.550: INFO: Created: 
latency-svc-bh9pv Sep 7 08:51:22.589: INFO: Got endpoints: latency-svc-bh9pv [1.389243125s] Sep 7 08:51:22.700: INFO: Created: latency-svc-q9f2n Sep 7 08:51:22.737: INFO: Got endpoints: latency-svc-q9f2n [1.448174426s] Sep 7 08:51:22.784: INFO: Created: latency-svc-rtkrf Sep 7 08:51:22.815: INFO: Got endpoints: latency-svc-rtkrf [1.467348576s] Sep 7 08:51:22.832: INFO: Created: latency-svc-65ksm Sep 7 08:51:22.845: INFO: Got endpoints: latency-svc-65ksm [1.411550611s] Sep 7 08:51:22.874: INFO: Created: latency-svc-fbc9k Sep 7 08:51:22.882: INFO: Got endpoints: latency-svc-fbc9k [1.401587977s] Sep 7 08:51:22.905: INFO: Created: latency-svc-rhpq5 Sep 7 08:51:22.947: INFO: Got endpoints: latency-svc-rhpq5 [1.316860922s] Sep 7 08:51:22.952: INFO: Created: latency-svc-w9r2l Sep 7 08:51:22.973: INFO: Got endpoints: latency-svc-w9r2l [1.318756762s] Sep 7 08:51:23.001: INFO: Created: latency-svc-krh4v Sep 7 08:51:23.021: INFO: Got endpoints: latency-svc-krh4v [1.312242392s] Sep 7 08:51:23.047: INFO: Created: latency-svc-w66c9 Sep 7 08:51:23.085: INFO: Got endpoints: latency-svc-w66c9 [1.288507592s] Sep 7 08:51:23.101: INFO: Created: latency-svc-db7dh Sep 7 08:51:23.119: INFO: Got endpoints: latency-svc-db7dh [1.15968068s] Sep 7 08:51:23.150: INFO: Created: latency-svc-g64cz Sep 7 08:51:23.160: INFO: Got endpoints: latency-svc-g64cz [1.17899594s] Sep 7 08:51:23.217: INFO: Created: latency-svc-xn8ln Sep 7 08:51:23.221: INFO: Got endpoints: latency-svc-xn8ln [1.075438909s] Sep 7 08:51:23.252: INFO: Created: latency-svc-djl7q Sep 7 08:51:23.269: INFO: Got endpoints: latency-svc-djl7q [1.095728421s] Sep 7 08:51:23.289: INFO: Created: latency-svc-pzrcx Sep 7 08:51:23.306: INFO: Got endpoints: latency-svc-pzrcx [951.969858ms] Sep 7 08:51:23.383: INFO: Created: latency-svc-96tfr Sep 7 08:51:23.401: INFO: Got endpoints: latency-svc-96tfr [992.782443ms] Sep 7 08:51:23.426: INFO: Created: latency-svc-5qt5k Sep 7 08:51:23.443: INFO: Got endpoints: latency-svc-5qt5k [854.209723ms] Sep 
7 08:51:23.498: INFO: Created: latency-svc-zrnkq Sep 7 08:51:23.511: INFO: Got endpoints: latency-svc-zrnkq [773.639417ms] Sep 7 08:51:23.547: INFO: Created: latency-svc-nptvn Sep 7 08:51:23.564: INFO: Got endpoints: latency-svc-nptvn [748.470405ms] Sep 7 08:51:23.587: INFO: Created: latency-svc-fm4qb Sep 7 08:51:23.624: INFO: Got endpoints: latency-svc-fm4qb [778.94213ms] Sep 7 08:51:23.634: INFO: Created: latency-svc-qb7xp Sep 7 08:51:23.648: INFO: Got endpoints: latency-svc-qb7xp [765.68387ms] Sep 7 08:51:23.696: INFO: Created: latency-svc-h2hjl Sep 7 08:51:23.721: INFO: Got endpoints: latency-svc-h2hjl [773.339518ms] Sep 7 08:51:23.768: INFO: Created: latency-svc-kwrlr Sep 7 08:51:23.792: INFO: Got endpoints: latency-svc-kwrlr [819.445991ms] Sep 7 08:51:23.827: INFO: Created: latency-svc-l2wdv Sep 7 08:51:23.845: INFO: Got endpoints: latency-svc-l2wdv [823.603567ms] Sep 7 08:51:23.910: INFO: Created: latency-svc-b7c79 Sep 7 08:51:23.913: INFO: Got endpoints: latency-svc-b7c79 [827.789888ms] Sep 7 08:51:23.946: INFO: Created: latency-svc-dp84q Sep 7 08:51:23.965: INFO: Got endpoints: latency-svc-dp84q [845.934443ms] Sep 7 08:51:23.989: INFO: Created: latency-svc-7tkgm Sep 7 08:51:24.055: INFO: Got endpoints: latency-svc-7tkgm [895.414682ms] Sep 7 08:51:24.057: INFO: Created: latency-svc-fzfkc Sep 7 08:51:24.080: INFO: Got endpoints: latency-svc-fzfkc [859.674955ms] Sep 7 08:51:24.111: INFO: Created: latency-svc-bm7b2 Sep 7 08:51:24.128: INFO: Got endpoints: latency-svc-bm7b2 [859.042066ms] Sep 7 08:51:24.230: INFO: Created: latency-svc-sdchz Sep 7 08:51:24.234: INFO: Got endpoints: latency-svc-sdchz [927.850097ms] Sep 7 08:51:24.303: INFO: Created: latency-svc-5slgj Sep 7 08:51:24.367: INFO: Got endpoints: latency-svc-5slgj [965.777881ms] Sep 7 08:51:24.372: INFO: Created: latency-svc-26k86 Sep 7 08:51:24.410: INFO: Got endpoints: latency-svc-26k86 [966.884111ms] Sep 7 08:51:24.439: INFO: Created: latency-svc-z4922 Sep 7 08:51:24.453: INFO: Got endpoints: 
latency-svc-z4922 [941.84802ms] Sep 7 08:51:24.505: INFO: Created: latency-svc-nqvvb Sep 7 08:51:24.513: INFO: Got endpoints: latency-svc-nqvvb [949.56971ms] Sep 7 08:51:24.536: INFO: Created: latency-svc-brrf9 Sep 7 08:51:24.572: INFO: Got endpoints: latency-svc-brrf9 [948.076497ms] Sep 7 08:51:24.649: INFO: Created: latency-svc-xf5wc Sep 7 08:51:24.669: INFO: Got endpoints: latency-svc-xf5wc [1.02176893s] Sep 7 08:51:24.693: INFO: Created: latency-svc-5ht5w Sep 7 08:51:24.711: INFO: Got endpoints: latency-svc-5ht5w [990.491241ms] Sep 7 08:51:24.792: INFO: Created: latency-svc-cv7zm Sep 7 08:51:24.796: INFO: Got endpoints: latency-svc-cv7zm [1.003732635s] Sep 7 08:51:24.830: INFO: Created: latency-svc-pwfpd Sep 7 08:51:24.843: INFO: Got endpoints: latency-svc-pwfpd [998.356974ms] Sep 7 08:51:24.866: INFO: Created: latency-svc-v5mjd Sep 7 08:51:24.880: INFO: Got endpoints: latency-svc-v5mjd [966.998296ms] Sep 7 08:51:24.930: INFO: Created: latency-svc-mmfj7 Sep 7 08:51:24.954: INFO: Got endpoints: latency-svc-mmfj7 [989.282328ms] Sep 7 08:51:24.997: INFO: Created: latency-svc-pbzsg Sep 7 08:51:25.013: INFO: Got endpoints: latency-svc-pbzsg [957.510281ms] Sep 7 08:51:25.073: INFO: Created: latency-svc-6m6zd Sep 7 08:51:25.085: INFO: Got endpoints: latency-svc-6m6zd [1.004138304s] Sep 7 08:51:25.130: INFO: Created: latency-svc-d2hm6 Sep 7 08:51:25.145: INFO: Got endpoints: latency-svc-d2hm6 [1.01739343s] Sep 7 08:51:25.211: INFO: Created: latency-svc-kntbj Sep 7 08:51:25.223: INFO: Got endpoints: latency-svc-kntbj [988.600803ms] Sep 7 08:51:25.248: INFO: Created: latency-svc-tlzvn Sep 7 08:51:25.265: INFO: Got endpoints: latency-svc-tlzvn [898.816968ms] Sep 7 08:51:25.284: INFO: Created: latency-svc-s8vw7 Sep 7 08:51:25.302: INFO: Got endpoints: latency-svc-s8vw7 [891.387171ms] Sep 7 08:51:25.349: INFO: Created: latency-svc-5zkms Sep 7 08:51:25.352: INFO: Got endpoints: latency-svc-5zkms [899.499623ms] Sep 7 08:51:25.430: INFO: Created: latency-svc-z2kh8 Sep 7 
08:51:25.540: INFO: Got endpoints: latency-svc-z2kh8 [1.027161245s] Sep 7 08:51:25.580: INFO: Created: latency-svc-smb8l Sep 7 08:51:25.599: INFO: Got endpoints: latency-svc-smb8l [1.026823355s] Sep 7 08:51:25.684: INFO: Created: latency-svc-r8nxk Sep 7 08:51:25.689: INFO: Got endpoints: latency-svc-r8nxk [1.019685551s] Sep 7 08:51:25.722: INFO: Created: latency-svc-wfw4r Sep 7 08:51:25.740: INFO: Got endpoints: latency-svc-wfw4r [1.028642064s] Sep 7 08:51:25.828: INFO: Created: latency-svc-f44dm Sep 7 08:51:25.832: INFO: Got endpoints: latency-svc-f44dm [1.035660354s] Sep 7 08:51:25.886: INFO: Created: latency-svc-6g7lc Sep 7 08:51:25.903: INFO: Got endpoints: latency-svc-6g7lc [1.059271115s] Sep 7 08:51:25.968: INFO: Created: latency-svc-nj5f8 Sep 7 08:51:25.972: INFO: Got endpoints: latency-svc-nj5f8 [1.092087968s] Sep 7 08:51:26.004: INFO: Created: latency-svc-w4zwt Sep 7 08:51:26.014: INFO: Got endpoints: latency-svc-w4zwt [1.059004755s] Sep 7 08:51:26.040: INFO: Created: latency-svc-wkrdv Sep 7 08:51:26.049: INFO: Got endpoints: latency-svc-wkrdv [1.036752018s] Sep 7 08:51:26.097: INFO: Created: latency-svc-ggpqt Sep 7 08:51:26.100: INFO: Got endpoints: latency-svc-ggpqt [1.01571691s] Sep 7 08:51:26.131: INFO: Created: latency-svc-qllsj Sep 7 08:51:26.186: INFO: Got endpoints: latency-svc-qllsj [1.040659212s] Sep 7 08:51:26.244: INFO: Created: latency-svc-lq7wn Sep 7 08:51:26.255: INFO: Got endpoints: latency-svc-lq7wn [1.032018741s] Sep 7 08:51:26.298: INFO: Created: latency-svc-dtmr7 Sep 7 08:51:26.309: INFO: Got endpoints: latency-svc-dtmr7 [1.043207821s] Sep 7 08:51:26.373: INFO: Created: latency-svc-pm997 Sep 7 08:51:26.395: INFO: Got endpoints: latency-svc-pm997 [1.093610137s] Sep 7 08:51:26.432: INFO: Created: latency-svc-wnqgq Sep 7 08:51:26.447: INFO: Got endpoints: latency-svc-wnqgq [1.094976743s] Sep 7 08:51:26.511: INFO: Created: latency-svc-xm6jb Sep 7 08:51:26.514: INFO: Got endpoints: latency-svc-xm6jb [973.465772ms] Sep 7 08:51:26.580: INFO: 
Created: latency-svc-mdvgx Sep 7 08:51:26.592: INFO: Got endpoints: latency-svc-mdvgx [992.773576ms] Sep 7 08:51:26.652: INFO: Created: latency-svc-km87r Sep 7 08:51:26.696: INFO: Got endpoints: latency-svc-km87r [1.00644929s] Sep 7 08:51:26.793: INFO: Created: latency-svc-8xgsw Sep 7 08:51:26.796: INFO: Got endpoints: latency-svc-8xgsw [1.05622237s] Sep 7 08:51:26.825: INFO: Created: latency-svc-8687g Sep 7 08:51:26.844: INFO: Got endpoints: latency-svc-8687g [1.012451379s] Sep 7 08:51:26.869: INFO: Created: latency-svc-2tgwn Sep 7 08:51:26.891: INFO: Got endpoints: latency-svc-2tgwn [988.542636ms] Sep 7 08:51:26.950: INFO: Created: latency-svc-qcr87 Sep 7 08:51:26.959: INFO: Got endpoints: latency-svc-qcr87 [987.03333ms] Sep 7 08:51:26.984: INFO: Created: latency-svc-dvvxz Sep 7 08:51:27.001: INFO: Got endpoints: latency-svc-dvvxz [987.776561ms] Sep 7 08:51:27.027: INFO: Created: latency-svc-jfhlh Sep 7 08:51:27.044: INFO: Got endpoints: latency-svc-jfhlh [994.226013ms] Sep 7 08:51:27.091: INFO: Created: latency-svc-krbv4 Sep 7 08:51:27.120: INFO: Created: latency-svc-qwfkr Sep 7 08:51:27.120: INFO: Got endpoints: latency-svc-krbv4 [1.019624501s] Sep 7 08:51:27.137: INFO: Got endpoints: latency-svc-qwfkr [950.726427ms] Sep 7 08:51:27.167: INFO: Created: latency-svc-hlngt Sep 7 08:51:27.235: INFO: Got endpoints: latency-svc-hlngt [980.176196ms] Sep 7 08:51:27.259: INFO: Created: latency-svc-fpzpb Sep 7 08:51:27.269: INFO: Got endpoints: latency-svc-fpzpb [960.511265ms] Sep 7 08:51:27.295: INFO: Created: latency-svc-6cv6z Sep 7 08:51:27.313: INFO: Got endpoints: latency-svc-6cv6z [917.173981ms] Sep 7 08:51:27.379: INFO: Created: latency-svc-74vdd Sep 7 08:51:27.382: INFO: Got endpoints: latency-svc-74vdd [934.426095ms] Sep 7 08:51:27.413: INFO: Created: latency-svc-nj8cc Sep 7 08:51:27.432: INFO: Got endpoints: latency-svc-nj8cc [918.345514ms] Sep 7 08:51:27.461: INFO: Created: latency-svc-zwllk Sep 7 08:51:27.540: INFO: Got endpoints: latency-svc-zwllk 
[947.949322ms] Sep 7 08:51:27.543: INFO: Created: latency-svc-x88vw Sep 7 08:51:27.560: INFO: Got endpoints: latency-svc-x88vw [863.821141ms] Sep 7 08:51:27.583: INFO: Created: latency-svc-5jdm5 Sep 7 08:51:27.607: INFO: Got endpoints: latency-svc-5jdm5 [810.230314ms] Sep 7 08:51:27.684: INFO: Created: latency-svc-kzjfv Sep 7 08:51:27.713: INFO: Got endpoints: latency-svc-kzjfv [868.101051ms] Sep 7 08:51:27.749: INFO: Created: latency-svc-5qphf Sep 7 08:51:27.763: INFO: Got endpoints: latency-svc-5qphf [872.218574ms] Sep 7 08:51:27.828: INFO: Created: latency-svc-42ktt Sep 7 08:51:27.831: INFO: Got endpoints: latency-svc-42ktt [872.096811ms] Sep 7 08:51:27.895: INFO: Created: latency-svc-8xqwt Sep 7 08:51:27.907: INFO: Got endpoints: latency-svc-8xqwt [906.084733ms] Sep 7 08:51:27.978: INFO: Created: latency-svc-l6pgq Sep 7 08:51:27.992: INFO: Got endpoints: latency-svc-l6pgq [948.193532ms] Sep 7 08:51:28.013: INFO: Created: latency-svc-m9kh7 Sep 7 08:51:28.028: INFO: Got endpoints: latency-svc-m9kh7 [907.956766ms] Sep 7 08:51:28.053: INFO: Created: latency-svc-6t55r Sep 7 08:51:28.064: INFO: Got endpoints: latency-svc-6t55r [926.868911ms] Sep 7 08:51:28.110: INFO: Created: latency-svc-kfvd4 Sep 7 08:51:28.140: INFO: Got endpoints: latency-svc-kfvd4 [904.523371ms] Sep 7 08:51:28.183: INFO: Created: latency-svc-ldlm6 Sep 7 08:51:28.197: INFO: Got endpoints: latency-svc-ldlm6 [927.377584ms] Sep 7 08:51:28.283: INFO: Created: latency-svc-krc24 Sep 7 08:51:28.311: INFO: Got endpoints: latency-svc-krc24 [998.291367ms] Sep 7 08:51:28.332: INFO: Created: latency-svc-vw2s7 Sep 7 08:51:28.349: INFO: Got endpoints: latency-svc-vw2s7 [967.162333ms] Sep 7 08:51:28.374: INFO: Created: latency-svc-rj5rk Sep 7 08:51:28.422: INFO: Got endpoints: latency-svc-rj5rk [989.611784ms] Sep 7 08:51:28.464: INFO: Created: latency-svc-zhxxp Sep 7 08:51:28.474: INFO: Got endpoints: latency-svc-zhxxp [933.442282ms] Sep 7 08:51:28.499: INFO: Created: latency-svc-kxmgq Sep 7 08:51:28.546: INFO: 
Got endpoints: latency-svc-kxmgq [986.561958ms] Sep 7 08:51:28.552: INFO: Created: latency-svc-dzbvb Sep 7 08:51:28.597: INFO: Got endpoints: latency-svc-dzbvb [989.851892ms] Sep 7 08:51:28.709: INFO: Created: latency-svc-cxrf2 Sep 7 08:51:28.712: INFO: Got endpoints: latency-svc-cxrf2 [999.071116ms] Sep 7 08:51:28.764: INFO: Created: latency-svc-zx2td Sep 7 08:51:28.798: INFO: Got endpoints: latency-svc-zx2td [1.035006712s] Sep 7 08:51:28.852: INFO: Created: latency-svc-7dn5j Sep 7 08:51:28.859: INFO: Got endpoints: latency-svc-7dn5j [1.027834176s] Sep 7 08:51:28.889: INFO: Created: latency-svc-gj295 Sep 7 08:51:28.907: INFO: Got endpoints: latency-svc-gj295 [999.036621ms] Sep 7 08:51:28.939: INFO: Created: latency-svc-pkj7q Sep 7 08:51:28.990: INFO: Got endpoints: latency-svc-pkj7q [997.758951ms] Sep 7 08:51:28.998: INFO: Created: latency-svc-xzb9q Sep 7 08:51:29.012: INFO: Got endpoints: latency-svc-xzb9q [984.292636ms] Sep 7 08:51:29.062: INFO: Created: latency-svc-tr5dt Sep 7 08:51:29.277: INFO: Got endpoints: latency-svc-tr5dt [1.212813811s] Sep 7 08:51:29.290: INFO: Created: latency-svc-rz8fg Sep 7 08:51:29.308: INFO: Got endpoints: latency-svc-rz8fg [1.16806735s] Sep 7 08:51:29.502: INFO: Created: latency-svc-fmmmd Sep 7 08:51:29.522: INFO: Got endpoints: latency-svc-fmmmd [1.325332323s] Sep 7 08:51:29.522: INFO: Latencies: [39.698128ms 143.413265ms 259.032971ms 303.718215ms 404.654223ms 441.96916ms 530.217453ms 556.916196ms 605.1896ms 706.811695ms 748.470405ms 749.460125ms 765.68387ms 773.339518ms 773.639417ms 778.94213ms 810.230314ms 819.445991ms 823.603567ms 827.789888ms 845.934443ms 854.209723ms 859.042066ms 859.674955ms 863.821141ms 868.101051ms 870.926311ms 872.096811ms 872.218574ms 872.398986ms 875.544545ms 891.387171ms 895.414682ms 898.816968ms 899.499623ms 903.109658ms 904.523371ms 906.084733ms 907.956766ms 917.173981ms 918.345514ms 926.868911ms 927.377584ms 927.850097ms 928.251537ms 929.176132ms 933.442282ms 934.426095ms 941.846545ms 941.84802ms 
947.949322ms 948.076497ms 948.193532ms 949.56971ms 950.726427ms 951.969858ms 957.510281ms 960.511265ms 961.438978ms 961.455998ms 964.998162ms 965.777881ms 966.884111ms 966.998296ms 967.162333ms 973.465772ms 973.801839ms 977.041719ms 980.176196ms 984.292636ms 986.561958ms 986.816038ms 987.03333ms 987.306371ms 987.776561ms 988.542636ms 988.600803ms 989.282328ms 989.611784ms 989.851892ms 990.491241ms 991.791815ms 992.773576ms 992.782443ms 994.226013ms 997.278612ms 997.758951ms 998.291367ms 998.356974ms 999.036621ms 999.071116ms 999.370089ms 1.003732635s 1.004138304s 1.00644929s 1.009248408s 1.012451379s 1.01571691s 1.01575351s 1.01739343s 1.019624501s 1.019685551s 1.02176893s 1.026823355s 1.027161245s 1.027834176s 1.028642064s 1.032018741s 1.035006712s 1.035660354s 1.036752018s 1.039342166s 1.040659212s 1.043207821s 1.044457617s 1.050035375s 1.05622237s 1.059004755s 1.059271115s 1.064478988s 1.066833681s 1.070280039s 1.071345794s 1.075438909s 1.081552445s 1.081978894s 1.083094933s 1.089322592s 1.092087968s 1.093610137s 1.094976743s 1.095728421s 1.096517914s 1.099889781s 1.101244567s 1.11269774s 1.123550196s 1.138475312s 1.144213265s 1.159622651s 1.15968068s 1.16384537s 1.164601459s 1.165645971s 1.16806735s 1.17309299s 1.178122307s 1.17899594s 1.188424429s 1.19469351s 1.197329611s 1.197715013s 1.200691857s 1.201236729s 1.205858051s 1.209502816s 1.212506897s 1.212813811s 1.220520905s 1.221788174s 1.232955081s 1.236422367s 1.23906381s 1.239399178s 1.241190432s 1.243443075s 1.246669686s 1.246923854s 1.249037163s 1.260947635s 1.273485907s 1.285774609s 1.286654023s 1.288507592s 1.292253423s 1.298071484s 1.310946937s 1.312242392s 1.316860922s 1.318756762s 1.325332323s 1.327052224s 1.359302733s 1.362885039s 1.364076013s 1.366264643s 1.366743247s 1.37112928s 1.374453904s 1.374552706s 1.382583762s 1.389243125s 1.390659155s 1.401587977s 1.407751658s 1.410936616s 1.411550611s 1.448174426s 1.467348576s 1.471521982s] Sep 7 08:51:29.522: INFO: 50 %ile: 1.019624501s Sep 7 
08:51:29.522: INFO: 90 %ile: 1.325332323s Sep 7 08:51:29.522: INFO: 99 %ile: 1.467348576s Sep 7 08:51:29.522: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:51:29.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-220" for this suite. • [SLOW TEST:18.872 seconds] [sig-network] Service endpoints latency /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":303,"completed":200,"skipped":3058,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:51:29.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:51:40.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4650" for this suite. • [SLOW TEST:11.446 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":303,"completed":201,"skipped":3117,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:51:41.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6652 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6652 STEP: creating replication controller externalsvc in namespace services-6652 I0907 08:51:41.289542 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-6652, replica count: 2 I0907 08:51:44.340167 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 08:51:47.340418 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Sep 7 08:51:47.473: INFO: Creating new exec pod Sep 7 08:51:53.567: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-6652 execpod6ppxg -- /bin/sh -x -c nslookup clusterip-service.services-6652.svc.cluster.local' Sep 7 08:51:53.835: INFO: stderr: "I0907 08:51:53.758058 2644 log.go:181] (0xc000141290) (0xc00017e640) Create stream\nI0907 08:51:53.758121 2644 log.go:181] (0xc000141290) (0xc00017e640) Stream added, broadcasting: 1\nI0907 08:51:53.761420 2644 log.go:181] (0xc000141290) Reply frame received for 1\nI0907 08:51:53.761465 2644 log.go:181] (0xc000141290) (0xc000d06000) Create stream\nI0907 08:51:53.761489 2644 log.go:181] (0xc000141290) (0xc000d06000) Stream added, broadcasting: 3\nI0907 08:51:53.762467 2644 log.go:181] (0xc000141290) Reply frame received for 3\nI0907 08:51:53.762507 2644 log.go:181] (0xc000141290) (0xc000a2a5a0) Create stream\nI0907 08:51:53.762515 2644 log.go:181] (0xc000141290) (0xc000a2a5a0) Stream added, broadcasting: 5\nI0907 08:51:53.763373 2644 log.go:181] (0xc000141290) Reply frame received for 5\nI0907 08:51:53.819156 2644 log.go:181] (0xc000141290) Data frame received for 5\nI0907 08:51:53.819177 2644 log.go:181] (0xc000a2a5a0) (5) Data frame handling\nI0907 08:51:53.819185 2644 log.go:181] (0xc000a2a5a0) (5) Data frame sent\n+ nslookup clusterip-service.services-6652.svc.cluster.local\nI0907 08:51:53.826907 2644 log.go:181] (0xc000141290) Data frame received for 3\nI0907 08:51:53.826941 2644 log.go:181] (0xc000d06000) (3) Data frame handling\nI0907 08:51:53.826960 2644 log.go:181] (0xc000d06000) (3) Data frame sent\nI0907 08:51:53.828213 2644 log.go:181] (0xc000141290) Data frame received for 3\nI0907 08:51:53.828268 2644 log.go:181] (0xc000d06000) (3) Data frame handling\nI0907 08:51:53.828306 2644 log.go:181] (0xc000d06000) (3) Data frame sent\nI0907 08:51:53.828414 2644 
log.go:181] (0xc000141290) Data frame received for 3\nI0907 08:51:53.828437 2644 log.go:181] (0xc000d06000) (3) Data frame handling\nI0907 08:51:53.828537 2644 log.go:181] (0xc000141290) Data frame received for 5\nI0907 08:51:53.828550 2644 log.go:181] (0xc000a2a5a0) (5) Data frame handling\nI0907 08:51:53.831135 2644 log.go:181] (0xc000141290) Data frame received for 1\nI0907 08:51:53.831155 2644 log.go:181] (0xc00017e640) (1) Data frame handling\nI0907 08:51:53.831164 2644 log.go:181] (0xc00017e640) (1) Data frame sent\nI0907 08:51:53.831173 2644 log.go:181] (0xc000141290) (0xc00017e640) Stream removed, broadcasting: 1\nI0907 08:51:53.831218 2644 log.go:181] (0xc000141290) Go away received\nI0907 08:51:53.831695 2644 log.go:181] (0xc000141290) (0xc00017e640) Stream removed, broadcasting: 1\nI0907 08:51:53.831710 2644 log.go:181] (0xc000141290) (0xc000d06000) Stream removed, broadcasting: 3\nI0907 08:51:53.831718 2644 log.go:181] (0xc000141290) (0xc000a2a5a0) Stream removed, broadcasting: 5\n" Sep 7 08:51:53.835: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6652.svc.cluster.local\tcanonical name = externalsvc.services-6652.svc.cluster.local.\nName:\texternalsvc.services-6652.svc.cluster.local\nAddress: 10.98.54.204\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6652, will wait for the garbage collector to delete the pods Sep 7 08:51:53.899: INFO: Deleting ReplicationController externalsvc took: 9.661218ms Sep 7 08:51:54.399: INFO: Terminating ReplicationController externalsvc pods took: 500.239104ms Sep 7 08:52:01.943: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:52:01.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6652" for this 
suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:21.026 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":303,"completed":202,"skipped":3118,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:52:02.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create and stop a working application [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components Sep 7 08:52:02.096: INFO: 
apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend
Sep 7 08:52:02.096: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1487' Sep 7 08:52:02.472: INFO: stderr: "" Sep 7 08:52:02.472: INFO: stdout: "service/agnhost-replica created\n" Sep 7 08:52:02.473: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend
Sep 7 08:52:02.473: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1487' Sep 7 08:52:02.833: INFO: stderr: "" Sep 7 08:52:02.833: INFO: stdout: "service/agnhost-primary created\n" Sep 7 08:52:02.833: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Sep 7 08:52:02.833: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1487' Sep 7 08:52:03.160: INFO: stderr: "" Sep 7 08:52:03.160: INFO: stdout: "service/frontend created\n" Sep 7 08:52:03.160: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Sep 7 08:52:03.160: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1487' Sep 7 08:52:03.453: INFO: stderr: "" Sep 7 08:52:03.453: INFO: stdout: "deployment.apps/frontend created\n" Sep 7 08:52:03.453: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Sep 7 08:52:03.453: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1487' Sep 7 08:52:03.791: INFO: stderr: "" Sep 7 08:52:03.791: INFO: stdout: "deployment.apps/agnhost-primary created\n" Sep 7 08:52:03.792: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Sep 7 08:52:03.792: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1487' Sep 7 08:52:04.096: INFO: stderr: "" Sep 7 08:52:04.096: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Sep 7 08:52:04.096: INFO: Waiting for all frontend pods to be Running. Sep 7 08:52:14.146: INFO: Waiting for frontend to serve content. Sep 7 08:52:14.165: INFO: Trying to add a new entry to the guestbook. Sep 7 08:52:14.254: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Sep 7 08:52:14.268: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1487' Sep 7 08:52:14.454: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 7 08:52:14.454: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Sep 7 08:52:14.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1487' Sep 7 08:52:14.671: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Sep 7 08:52:14.671: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Sep 7 08:52:14.672: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1487' Sep 7 08:52:14.919: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 7 08:52:14.919: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Sep 7 08:52:14.919: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1487' Sep 7 08:52:15.060: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 7 08:52:15.060: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Sep 7 08:52:15.060: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1487' Sep 7 08:52:15.619: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Sep 7 08:52:15.619: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Sep 7 08:52:15.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1487' Sep 7 08:52:16.111: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 7 08:52:16.111: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:52:16.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1487" for this suite. 
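Each "using delete to clean up resources" step above reran the same forced-deletion invocation with a different manifest on stdin. A minimal sketch of that command line, with flag values copied from this run; `force_delete_cmd` is a made-up helper name, not part of the test framework:

```shell
# Assemble the forced-deletion command the guestbook test repeats per resource.
# force_delete_cmd is a hypothetical helper; the flag values come from the log.
force_delete_cmd() {
  # $1 = API server URL, $2 = kubeconfig path, $3 = target namespace
  printf '%s' "kubectl --server=$1 --kubeconfig=$2 delete --grace-period=0 --force -f - --namespace=$3"
}

CMD="$(force_delete_cmd https://172.30.12.66:43335 /root/.kube/config kubectl-1487)"
echo "$CMD"
```

`--grace-period=0 --force` skips the graceful-termination wait, which is why kubectl prints the "Immediate deletion does not wait for confirmation" warning seen in the stderr lines above.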
• [SLOW TEST:14.199 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:351 should create and stop a working application [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":303,"completed":203,"skipped":3121,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:52:16.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Sep 7 08:52:17.048: INFO: Created pod &Pod{ObjectMeta:{dns-1131 dns-1131 /api/v1/namespaces/dns-1131/pods/dns-1131 f48e12d6-8aa6-4d1c-9b3f-838273fbd900 293511 0 2020-09-07 08:52:17 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-09-07 08:52:16 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mlc2c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mlc2c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mlc2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},
},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 7 08:52:17.260: INFO: The status of Pod dns-1131 is Pending, waiting for it to be Running (with Ready = true) Sep 7 08:52:19.319: INFO: The status of Pod dns-1131 is Pending, waiting for it to be Running (with Ready = true) Sep 7 08:52:21.263: INFO: The status of Pod dns-1131 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on 
pod... Sep 7 08:52:21.263: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-1131 PodName:dns-1131 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 7 08:52:21.263: INFO: >>> kubeConfig: /root/.kube/config I0907 08:52:21.295042 7 log.go:181] (0xc002affd90) (0xc00074b400) Create stream I0907 08:52:21.295064 7 log.go:181] (0xc002affd90) (0xc00074b400) Stream added, broadcasting: 1 I0907 08:52:21.297678 7 log.go:181] (0xc002affd90) Reply frame received for 1 I0907 08:52:21.297721 7 log.go:181] (0xc002affd90) (0xc00369d720) Create stream I0907 08:52:21.297729 7 log.go:181] (0xc002affd90) (0xc00369d720) Stream added, broadcasting: 3 I0907 08:52:21.298647 7 log.go:181] (0xc002affd90) Reply frame received for 3 I0907 08:52:21.298693 7 log.go:181] (0xc002affd90) (0xc003e5db80) Create stream I0907 08:52:21.298710 7 log.go:181] (0xc002affd90) (0xc003e5db80) Stream added, broadcasting: 5 I0907 08:52:21.299911 7 log.go:181] (0xc002affd90) Reply frame received for 5 I0907 08:52:21.402228 7 log.go:181] (0xc002affd90) Data frame received for 3 I0907 08:52:21.402282 7 log.go:181] (0xc00369d720) (3) Data frame handling I0907 08:52:21.402325 7 log.go:181] (0xc00369d720) (3) Data frame sent I0907 08:52:21.402952 7 log.go:181] (0xc002affd90) Data frame received for 5 I0907 08:52:21.402978 7 log.go:181] (0xc003e5db80) (5) Data frame handling I0907 08:52:21.403207 7 log.go:181] (0xc002affd90) Data frame received for 3 I0907 08:52:21.403239 7 log.go:181] (0xc00369d720) (3) Data frame handling I0907 08:52:21.405240 7 log.go:181] (0xc002affd90) Data frame received for 1 I0907 08:52:21.405285 7 log.go:181] (0xc00074b400) (1) Data frame handling I0907 08:52:21.405316 7 log.go:181] (0xc00074b400) (1) Data frame sent I0907 08:52:21.405336 7 log.go:181] (0xc002affd90) (0xc00074b400) Stream removed, broadcasting: 1 I0907 08:52:21.405352 7 log.go:181] (0xc002affd90) Go away received I0907 08:52:21.405472 7 log.go:181] (0xc002affd90) 
(0xc00074b400) Stream removed, broadcasting: 1 I0907 08:52:21.405494 7 log.go:181] (0xc002affd90) (0xc00369d720) Stream removed, broadcasting: 3 I0907 08:52:21.405509 7 log.go:181] (0xc002affd90) (0xc003e5db80) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Sep 7 08:52:21.405: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-1131 PodName:dns-1131 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 7 08:52:21.405: INFO: >>> kubeConfig: /root/.kube/config I0907 08:52:21.466778 7 log.go:181] (0xc0036b2630) (0xc003fb20a0) Create stream I0907 08:52:21.466801 7 log.go:181] (0xc0036b2630) (0xc003fb20a0) Stream added, broadcasting: 1 I0907 08:52:21.468630 7 log.go:181] (0xc0036b2630) Reply frame received for 1 I0907 08:52:21.468668 7 log.go:181] (0xc0036b2630) (0xc00369d860) Create stream I0907 08:52:21.468681 7 log.go:181] (0xc0036b2630) (0xc00369d860) Stream added, broadcasting: 3 I0907 08:52:21.469493 7 log.go:181] (0xc0036b2630) Reply frame received for 3 I0907 08:52:21.469538 7 log.go:181] (0xc0036b2630) (0xc003e5dc20) Create stream I0907 08:52:21.469561 7 log.go:181] (0xc0036b2630) (0xc003e5dc20) Stream added, broadcasting: 5 I0907 08:52:21.470236 7 log.go:181] (0xc0036b2630) Reply frame received for 5 I0907 08:52:21.531774 7 log.go:181] (0xc0036b2630) Data frame received for 3 I0907 08:52:21.531797 7 log.go:181] (0xc00369d860) (3) Data frame handling I0907 08:52:21.531805 7 log.go:181] (0xc00369d860) (3) Data frame sent I0907 08:52:21.533529 7 log.go:181] (0xc0036b2630) Data frame received for 3 I0907 08:52:21.533578 7 log.go:181] (0xc00369d860) (3) Data frame handling I0907 08:52:21.533602 7 log.go:181] (0xc0036b2630) Data frame received for 5 I0907 08:52:21.533615 7 log.go:181] (0xc003e5dc20) (5) Data frame handling I0907 08:52:21.535217 7 log.go:181] (0xc0036b2630) Data frame received for 1 I0907 08:52:21.535253 7 log.go:181] (0xc003fb20a0) (1) Data 
frame handling I0907 08:52:21.535278 7 log.go:181] (0xc003fb20a0) (1) Data frame sent I0907 08:52:21.535303 7 log.go:181] (0xc0036b2630) (0xc003fb20a0) Stream removed, broadcasting: 1 I0907 08:52:21.535333 7 log.go:181] (0xc0036b2630) Go away received I0907 08:52:21.535450 7 log.go:181] (0xc0036b2630) (0xc003fb20a0) Stream removed, broadcasting: 1 I0907 08:52:21.535479 7 log.go:181] (0xc0036b2630) (0xc00369d860) Stream removed, broadcasting: 3 I0907 08:52:21.535495 7 log.go:181] (0xc0036b2630) (0xc003e5dc20) Stream removed, broadcasting: 5 Sep 7 08:52:21.535: INFO: Deleting pod dns-1131... [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:52:21.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1131" for this suite. • [SLOW TEST:5.475 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":303,"completed":204,"skipped":3138,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:52:21.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 7 08:52:22.995: INFO: Waiting up to 5m0s for pod "downwardapi-volume-010c690b-aafc-40e9-87bc-a8539872ca08" in namespace "downward-api-7473" to be "Succeeded or Failed" Sep 7 08:52:23.175: INFO: Pod "downwardapi-volume-010c690b-aafc-40e9-87bc-a8539872ca08": Phase="Pending", Reason="", readiness=false. Elapsed: 179.264381ms Sep 7 08:52:25.186: INFO: Pod "downwardapi-volume-010c690b-aafc-40e9-87bc-a8539872ca08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190582341s Sep 7 08:52:27.230: INFO: Pod "downwardapi-volume-010c690b-aafc-40e9-87bc-a8539872ca08": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.234206399s STEP: Saw pod success Sep 7 08:52:27.230: INFO: Pod "downwardapi-volume-010c690b-aafc-40e9-87bc-a8539872ca08" satisfied condition "Succeeded or Failed" Sep 7 08:52:27.233: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-010c690b-aafc-40e9-87bc-a8539872ca08 container client-container: STEP: delete the pod Sep 7 08:52:27.369: INFO: Waiting for pod downwardapi-volume-010c690b-aafc-40e9-87bc-a8539872ca08 to disappear Sep 7 08:52:27.394: INFO: Pod downwardapi-volume-010c690b-aafc-40e9-87bc-a8539872ca08 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:52:27.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7473" for this suite. • [SLOW TEST:5.693 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":205,"skipped":3159,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:52:27.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-474f0b24-e531-48bd-bae5-8f4d493cfd3f STEP: Creating a pod to test consume configMaps Sep 7 08:52:27.514: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-93eebcfc-0727-42f8-97a9-7b2da907a718" in namespace "projected-7912" to be "Succeeded or Failed" Sep 7 08:52:27.536: INFO: Pod "pod-projected-configmaps-93eebcfc-0727-42f8-97a9-7b2da907a718": Phase="Pending", Reason="", readiness=false. Elapsed: 21.854552ms Sep 7 08:52:29.540: INFO: Pod "pod-projected-configmaps-93eebcfc-0727-42f8-97a9-7b2da907a718": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025878925s Sep 7 08:52:31.544: INFO: Pod "pod-projected-configmaps-93eebcfc-0727-42f8-97a9-7b2da907a718": Phase="Running", Reason="", readiness=true. Elapsed: 4.030482277s Sep 7 08:52:33.549: INFO: Pod "pod-projected-configmaps-93eebcfc-0727-42f8-97a9-7b2da907a718": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.034783911s STEP: Saw pod success Sep 7 08:52:33.549: INFO: Pod "pod-projected-configmaps-93eebcfc-0727-42f8-97a9-7b2da907a718" satisfied condition "Succeeded or Failed" Sep 7 08:52:33.552: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-93eebcfc-0727-42f8-97a9-7b2da907a718 container projected-configmap-volume-test: STEP: delete the pod Sep 7 08:52:33.598: INFO: Waiting for pod pod-projected-configmaps-93eebcfc-0727-42f8-97a9-7b2da907a718 to disappear Sep 7 08:52:33.610: INFO: Pod pod-projected-configmaps-93eebcfc-0727-42f8-97a9-7b2da907a718 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:52:33.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7912" for this suite. • [SLOW TEST:6.213 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":206,"skipped":3188,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] 
EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:52:33.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container Sep 7 08:52:39.749: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-857 PodName:pod-sharedvolume-a020b9e0-7988-4ac7-87b6-aa5e16c520ea ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 7 08:52:39.749: INFO: >>> kubeConfig: /root/.kube/config I0907 08:52:39.786722 7 log.go:181] (0xc00097e000) (0xc0027c7900) Create stream I0907 08:52:39.786749 7 log.go:181] (0xc00097e000) (0xc0027c7900) Stream added, broadcasting: 1 I0907 08:52:39.789080 7 log.go:181] (0xc00097e000) Reply frame received for 1 I0907 08:52:39.789118 7 log.go:181] (0xc00097e000) (0xc0027c79a0) Create stream I0907 08:52:39.789128 7 log.go:181] (0xc00097e000) (0xc0027c79a0) Stream added, broadcasting: 3 I0907 08:52:39.790423 7 log.go:181] (0xc00097e000) Reply frame received for 3 I0907 08:52:39.790480 7 log.go:181] (0xc00097e000) (0xc00383c780) Create stream I0907 08:52:39.790498 7 log.go:181] (0xc00097e000) (0xc00383c780) Stream added, broadcasting: 5 I0907 08:52:39.791648 7 log.go:181] (0xc00097e000) Reply frame received for 5 I0907 08:52:39.880904 7 log.go:181] (0xc00097e000) Data frame received for 5 I0907 08:52:39.880970 7 log.go:181] (0xc00383c780) (5) 
Data frame handling I0907 08:52:39.880995 7 log.go:181] (0xc00097e000) Data frame received for 3 I0907 08:52:39.881005 7 log.go:181] (0xc0027c79a0) (3) Data frame handling I0907 08:52:39.881017 7 log.go:181] (0xc0027c79a0) (3) Data frame sent I0907 08:52:39.881027 7 log.go:181] (0xc00097e000) Data frame received for 3 I0907 08:52:39.881031 7 log.go:181] (0xc0027c79a0) (3) Data frame handling I0907 08:52:39.882548 7 log.go:181] (0xc00097e000) Data frame received for 1 I0907 08:52:39.882601 7 log.go:181] (0xc0027c7900) (1) Data frame handling I0907 08:52:39.882625 7 log.go:181] (0xc0027c7900) (1) Data frame sent I0907 08:52:39.882643 7 log.go:181] (0xc00097e000) (0xc0027c7900) Stream removed, broadcasting: 1 I0907 08:52:39.882664 7 log.go:181] (0xc00097e000) Go away received I0907 08:52:39.882798 7 log.go:181] (0xc00097e000) (0xc0027c7900) Stream removed, broadcasting: 1 I0907 08:52:39.882837 7 log.go:181] (0xc00097e000) (0xc0027c79a0) Stream removed, broadcasting: 3 I0907 08:52:39.882870 7 log.go:181] (0xc00097e000) (0xc00383c780) Stream removed, broadcasting: 5 Sep 7 08:52:39.882: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:52:39.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-857" for this suite. 
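The shared-volume test above execs `cat /usr/share/volumeshare/shareddata.txt` in one container to read a file produced under the same emptyDir mount in another. A sketch of that pod shape follows; the container names and mount path mirror the log, while the write/read commands and images are assumptions for illustration:

```python
# Sketch of a pod sharing one emptyDir volume between two containers.
shared_volume = {"name": "shared-data", "emptyDir": {}}

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-sharedvolume"},
    "spec": {
        "volumes": [shared_volume],
        "containers": [
            {
                # Writer: creates the file the other container reads.
                "name": "busybox-main-container",
                "image": "busybox",
                "command": ["/bin/sh", "-c",
                            "echo hello > /usr/share/volumeshare/shareddata.txt"
                            " && sleep 3600"],
                "volumeMounts": [{"name": "shared-data",
                                  "mountPath": "/usr/share/volumeshare"}],
            },
            {
                # Reader: mounts the same volume, so it sees the same contents.
                "name": "nginx-container",
                "image": "nginx",
                "volumeMounts": [{"name": "shared-data",
                                  "mountPath": "/usr/share/volumeshare"}],
            },
        ],
    },
}

# Both containers reference the single shared volume.
mounts = {m["name"] for c in pod["spec"]["containers"] for m in c["volumeMounts"]}
assert mounts == {"shared-data"}
```

Because emptyDir lives for the lifetime of the pod and is mounted into every container that asks for it, a write from one container is immediately visible to the other.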
• [SLOW TEST:6.277 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":303,"completed":207,"skipped":3195,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:52:39.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-5553/configmap-test-f951e7c2-b584-40e2-a32b-fa0db8a88b9d STEP: Creating a pod to test consume configMaps Sep 7 08:52:40.003: INFO: Waiting up to 5m0s for pod "pod-configmaps-233e731f-13f5-4121-9a4f-ead1c124c5ac" in namespace "configmap-5553" to be "Succeeded or Failed" Sep 7 08:52:40.013: INFO: Pod "pod-configmaps-233e731f-13f5-4121-9a4f-ead1c124c5ac": Phase="Pending", Reason="", 
readiness=false. Elapsed: 10.27827ms Sep 7 08:52:42.017: INFO: Pod "pod-configmaps-233e731f-13f5-4121-9a4f-ead1c124c5ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014716429s Sep 7 08:52:44.022: INFO: Pod "pod-configmaps-233e731f-13f5-4121-9a4f-ead1c124c5ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019171682s STEP: Saw pod success Sep 7 08:52:44.022: INFO: Pod "pod-configmaps-233e731f-13f5-4121-9a4f-ead1c124c5ac" satisfied condition "Succeeded or Failed" Sep 7 08:52:44.025: INFO: Trying to get logs from node latest-worker pod pod-configmaps-233e731f-13f5-4121-9a4f-ead1c124c5ac container env-test: STEP: delete the pod Sep 7 08:52:44.123: INFO: Waiting for pod pod-configmaps-233e731f-13f5-4121-9a4f-ead1c124c5ac to disappear Sep 7 08:52:44.151: INFO: Pod pod-configmaps-233e731f-13f5-4121-9a4f-ead1c124c5ac no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:52:44.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5553" for this suite. 
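The ConfigMap env-var test above wires a ConfigMap key into a container's environment via `valueFrom.configMapKeyRef`. A minimal sketch of the two objects involved (the ConfigMap name, key, and variable name here are hypothetical; only the mechanism matches the test):

```python
# Sketch: consume a ConfigMap key as an environment variable.
configmap = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "configmap-test"},
    "data": {"data-1": "value-1"},
}

container = {
    "name": "env-test",
    "image": "busybox",
    "command": ["sh", "-c", "env"],    # the e2e test checks the printed env
    "env": [{
        "name": "CONFIG_DATA_1",
        "valueFrom": {
            # Resolved by the kubelet at container start; the variable is
            # not updated if the ConfigMap changes later.
            "configMapKeyRef": {"name": "configmap-test", "key": "data-1"},
        },
    }],
}

assert container["env"][0]["valueFrom"]["configMapKeyRef"]["key"] in configmap["data"]
```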
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":303,"completed":208,"skipped":3207,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:52:44.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-fa9f59ec-07a3-46d1-873b-29a1c15ebf38 STEP: Creating a pod to test consume secrets Sep 7 08:52:44.292: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c65f2f34-8f9d-4b2f-aa8d-ac3741447f54" in namespace "projected-2357" to be "Succeeded or Failed" Sep 7 08:52:44.332: INFO: Pod "pod-projected-secrets-c65f2f34-8f9d-4b2f-aa8d-ac3741447f54": Phase="Pending", Reason="", readiness=false. Elapsed: 39.584817ms Sep 7 08:52:46.336: INFO: Pod "pod-projected-secrets-c65f2f34-8f9d-4b2f-aa8d-ac3741447f54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043757314s Sep 7 08:52:48.341: INFO: Pod "pod-projected-secrets-c65f2f34-8f9d-4b2f-aa8d-ac3741447f54": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.04854839s STEP: Saw pod success Sep 7 08:52:48.341: INFO: Pod "pod-projected-secrets-c65f2f34-8f9d-4b2f-aa8d-ac3741447f54" satisfied condition "Succeeded or Failed" Sep 7 08:52:48.346: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-c65f2f34-8f9d-4b2f-aa8d-ac3741447f54 container projected-secret-volume-test: STEP: delete the pod Sep 7 08:52:48.427: INFO: Waiting for pod pod-projected-secrets-c65f2f34-8f9d-4b2f-aa8d-ac3741447f54 to disappear Sep 7 08:52:48.430: INFO: Pod pod-projected-secrets-c65f2f34-8f9d-4b2f-aa8d-ac3741447f54 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:52:48.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2357" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":209,"skipped":3255,"failed":0} ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:52:48.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2338 [It] should have a working scale subresource [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-2338 Sep 7 08:52:48.579: INFO: Found 0 stateful pods, waiting for 1 Sep 7 08:52:58.585: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 7 08:52:58.641: INFO: Deleting all statefulset in ns statefulset-2338 Sep 7 08:52:58.751: INFO: Scaling statefulset ss to 0 Sep 7 08:53:18.832: INFO: Waiting for statefulset status.replicas updated to 0 Sep 7 08:53:18.835: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:53:18.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2338" for this suite. 
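The "getting/updating a scale subresource" steps above use the `/scale` endpoint that apps/v1 exposes on a StatefulSet: the client reads a small `Scale` object, edits `spec.replicas`, and writes it back, without touching the full StatefulSet spec. A sketch of that flow with the HTTP layer elided (paths and payload shape follow the apps/v1 API; the namespace and name come from the log):

```python
def scale_path(namespace, name):
    """URL path of the scale subresource for a StatefulSet."""
    return f"/apis/apps/v1/namespaces/{namespace}/statefulsets/{name}/scale"

def bump_replicas(scale, delta):
    """Return an updated Scale body; the server recomputes status.replicas."""
    updated = dict(scale)
    updated["spec"] = {"replicas": scale["spec"]["replicas"] + delta}
    return updated

# Shape of the object returned by GET on the scale subresource.
scale = {"kind": "Scale", "apiVersion": "autoscaling/v1",
         "spec": {"replicas": 1}, "status": {"replicas": 1}}

assert scale_path("statefulset-2338", "ss").endswith("/ss/scale")
assert bump_replicas(scale, 1)["spec"]["replicas"] == 2
```

Verifying `Spec.Replicas` afterwards, as the test does, confirms that the subresource update was applied to the underlying StatefulSet.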
• [SLOW TEST:30.417 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":303,"completed":210,"skipped":3255,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:53:18.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Sep 7 08:53:18.952: INFO: Waiting up to 5m0s for pod "pod-960931dc-8325-43b3-bd2d-5bc76642a0bd" in 
namespace "emptydir-884" to be "Succeeded or Failed" Sep 7 08:53:18.967: INFO: Pod "pod-960931dc-8325-43b3-bd2d-5bc76642a0bd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.213427ms Sep 7 08:53:20.970: INFO: Pod "pod-960931dc-8325-43b3-bd2d-5bc76642a0bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017767042s Sep 7 08:53:22.975: INFO: Pod "pod-960931dc-8325-43b3-bd2d-5bc76642a0bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022666109s STEP: Saw pod success Sep 7 08:53:22.975: INFO: Pod "pod-960931dc-8325-43b3-bd2d-5bc76642a0bd" satisfied condition "Succeeded or Failed" Sep 7 08:53:22.978: INFO: Trying to get logs from node latest-worker2 pod pod-960931dc-8325-43b3-bd2d-5bc76642a0bd container test-container: STEP: delete the pod Sep 7 08:53:23.071: INFO: Waiting for pod pod-960931dc-8325-43b3-bd2d-5bc76642a0bd to disappear Sep 7 08:53:23.102: INFO: Pod pod-960931dc-8325-43b3-bd2d-5bc76642a0bd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:53:23.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-884" for this suite. 
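The "(root,0777,default)" case above writes into an emptyDir of the default medium as root with mode 0777 and checks the result. A sketch of that pod, assuming a generic shell probe in place of the suite's actual agnhost mounttest invocation:

```python
# Sketch of the emptyDir 0777/default-medium case. The stat/chmod probe
# is an assumption for illustration, not the exact e2e test command.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-emptydir-0777"},
    "spec": {
        # {} selects the default medium (node disk, not tmpfs).
        "volumes": [{"name": "test-volume", "emptyDir": {}}],
        "containers": [{
            "name": "test-container",
            "image": "busybox",
            "command": ["sh", "-c",
                        "touch /test-volume/f && chmod 0777 /test-volume/f"
                        " && stat -c %a /test-volume/f"],
            "volumeMounts": [{"name": "test-volume",
                              "mountPath": "/test-volume"}],
        }],
        # The suite waits for "Succeeded or Failed", so the pod must exit.
        "restartPolicy": "Never",
    },
}

assert pod["spec"]["volumes"][0]["emptyDir"] == {}
```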
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":211,"skipped":3278,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:53:23.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:53:23.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8923" for this suite. 
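(Editor's note: the "Pods Set QOS Class" case above verifies that a pod whose resource requests equal its limits for both cpu and memory is classified as Guaranteed. A hedged sketch of that classification rule, with illustrative values -- the real logic lives in the Kubernetes control plane, and the full rule also yields BestEffort when no requests or limits are set at all:)

```shell
# Illustrative values; a pod with matching requests and limits for every
# resource on every container is classed Guaranteed, otherwise Burstable
# (or BestEffort when nothing is set -- omitted from this sketch).
request_cpu="100m"; limit_cpu="100m"
request_mem="128Mi"; limit_mem="128Mi"
if [ "$request_cpu" = "$limit_cpu" ] && [ "$request_mem" = "$limit_mem" ]; then
  echo "Guaranteed"
else
  echo "Burstable"
fi
```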
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":303,"completed":212,"skipped":3293,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:53:23.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:53:30.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3454" for this suite. • [SLOW TEST:7.218 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":303,"completed":213,"skipped":3304,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:53:30.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-a37d663f-5859-428d-ad4f-c1c8f5d7aa3c in namespace container-probe-3102 Sep 7 08:53:34.803: INFO: Started pod busybox-a37d663f-5859-428d-ad4f-c1c8f5d7aa3c in namespace container-probe-3102 STEP: checking the pod's current state and verifying that restartCount is present Sep 7 08:53:34.895: INFO: Initial restart count of pod busybox-a37d663f-5859-428d-ad4f-c1c8f5d7aa3c is 
0 Sep 7 08:54:31.052: INFO: Restart count of pod container-probe-3102/busybox-a37d663f-5859-428d-ad4f-c1c8f5d7aa3c is now 1 (56.156991363s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:54:31.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3102" for this suite. • [SLOW TEST:60.641 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":214,"skipped":3333,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:54:31.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] 
[NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs Sep 7 08:54:31.181: INFO: Waiting up to 5m0s for pod "pod-fe4dcc26-6cf2-4d2f-ba4c-24af7c764837" in namespace "emptydir-8686" to be "Succeeded or Failed" Sep 7 08:54:31.205: INFO: Pod "pod-fe4dcc26-6cf2-4d2f-ba4c-24af7c764837": Phase="Pending", Reason="", readiness=false. Elapsed: 24.21598ms Sep 7 08:54:33.211: INFO: Pod "pod-fe4dcc26-6cf2-4d2f-ba4c-24af7c764837": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029330514s Sep 7 08:54:35.215: INFO: Pod "pod-fe4dcc26-6cf2-4d2f-ba4c-24af7c764837": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033898576s STEP: Saw pod success Sep 7 08:54:35.215: INFO: Pod "pod-fe4dcc26-6cf2-4d2f-ba4c-24af7c764837" satisfied condition "Succeeded or Failed" Sep 7 08:54:35.219: INFO: Trying to get logs from node latest-worker pod pod-fe4dcc26-6cf2-4d2f-ba4c-24af7c764837 container test-container: STEP: delete the pod Sep 7 08:54:35.350: INFO: Waiting for pod pod-fe4dcc26-6cf2-4d2f-ba4c-24af7c764837 to disappear Sep 7 08:54:35.380: INFO: Pod pod-fe4dcc26-6cf2-4d2f-ba4c-24af7c764837 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:54:35.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8686" for this suite. 
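(Editor's note: the "volume on tmpfs" case above requests an emptyDir with `medium: Memory`, which Kubernetes backs with a tmpfs mount. One hedged way to confirm a mount is tmpfs from inside a Linux container -- assuming `/proc/mounts` is readable, which it is on any mainstream Linux node -- is to read the filesystem type column:)

```shell
# Hedged sketch, Linux-only: column 3 of /proc/mounts is the fs type.
# Any Linux system has at least one tmpfs mount (e.g. /dev/shm or /run),
# so this prints "tmpfs"; the e2e test checks the actual emptyDir path.
grep -w tmpfs /proc/mounts | head -n 1 | awk '{print $3}'
```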
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":215,"skipped":3339,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:54:35.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-5166030a-0072-4971-91b5-139447d366a6 [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:54:35.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8504" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":303,"completed":216,"skipped":3360,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:54:35.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:54:52.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-509" for this suite. • [SLOW TEST:17.096 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":303,"completed":217,"skipped":3371,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:54:52.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9175.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9175.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9175.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9175.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9175.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local;check="$$(dig +tcp +noall 
+answer +search _http._tcp.dns-test-service.dns-9175.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9175.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9175.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9175.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9175.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9175.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 130.101.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.101.130_udp@PTR;check="$$(dig +tcp +noall +answer +search 130.101.107.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.107.101.130_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9175.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9175.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9175.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9175.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9175.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9175.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9175.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9175.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9175.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9175.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9175.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 130.101.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.101.130_udp@PTR;check="$$(dig +tcp +noall +answer +search 130.101.107.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.107.101.130_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 7 08:55:00.953: INFO: Unable to read wheezy_udp@dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:00.957: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:00.960: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:00.962: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:00.980: INFO: Unable to read jessie_udp@dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:00.983: INFO: Unable to read jessie_tcp@dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:00.985: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local from pod 
dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:00.987: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:01.011: INFO: Lookups using dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5 failed for: [wheezy_udp@dns-test-service.dns-9175.svc.cluster.local wheezy_tcp@dns-test-service.dns-9175.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local jessie_udp@dns-test-service.dns-9175.svc.cluster.local jessie_tcp@dns-test-service.dns-9175.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local] Sep 7 08:55:06.016: INFO: Unable to read wheezy_udp@dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:06.020: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:06.024: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:06.027: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local from pod 
dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:06.053: INFO: Unable to read jessie_udp@dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:06.056: INFO: Unable to read jessie_tcp@dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:06.059: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:06.062: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:06.078: INFO: Lookups using dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5 failed for: [wheezy_udp@dns-test-service.dns-9175.svc.cluster.local wheezy_tcp@dns-test-service.dns-9175.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local jessie_udp@dns-test-service.dns-9175.svc.cluster.local jessie_tcp@dns-test-service.dns-9175.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local] Sep 7 08:55:11.017: INFO: Unable to read wheezy_udp@dns-test-service.dns-9175.svc.cluster.local from pod 
dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:11.021: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:11.025: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:11.028: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:11.049: INFO: Unable to read jessie_udp@dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:11.051: INFO: Unable to read jessie_tcp@dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:11.053: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:11.055: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the 
requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:11.071: INFO: Lookups using dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5 failed for: [wheezy_udp@dns-test-service.dns-9175.svc.cluster.local wheezy_tcp@dns-test-service.dns-9175.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local jessie_udp@dns-test-service.dns-9175.svc.cluster.local jessie_tcp@dns-test-service.dns-9175.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local] Sep 7 08:55:16.016: INFO: Unable to read wheezy_udp@dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:16.020: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:16.023: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:16.026: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5) Sep 7 08:55:16.049: INFO: Unable to read jessie_udp@dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods 
dns-test-5d5973e6-5094-4160-975c-3a81f75866b5)
Sep 7 08:55:16.053: INFO: Unable to read jessie_tcp@dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5)
Sep 7 08:55:16.057: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5)
Sep 7 08:55:16.060: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5)
Sep 7 08:55:16.078: INFO: Lookups using dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5 failed for: [wheezy_udp@dns-test-service.dns-9175.svc.cluster.local wheezy_tcp@dns-test-service.dns-9175.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local jessie_udp@dns-test-service.dns-9175.svc.cluster.local jessie_tcp@dns-test-service.dns-9175.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local]
Sep 7 08:55:21.015: INFO: Unable to read wheezy_udp@dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5)
Sep 7 08:55:21.018: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5)
Sep 7 08:55:21.021: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5)
Sep 7 08:55:21.024: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5)
Sep 7 08:55:21.045: INFO: Unable to read jessie_udp@dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5)
Sep 7 08:55:21.048: INFO: Unable to read jessie_tcp@dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5)
Sep 7 08:55:21.051: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5)
Sep 7 08:55:21.054: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5)
Sep 7 08:55:21.074: INFO: Lookups using dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5 failed for: [wheezy_udp@dns-test-service.dns-9175.svc.cluster.local wheezy_tcp@dns-test-service.dns-9175.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local jessie_udp@dns-test-service.dns-9175.svc.cluster.local jessie_tcp@dns-test-service.dns-9175.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local]
Sep 7 08:55:26.016: INFO: Unable to read wheezy_udp@dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5)
Sep 7 08:55:26.020: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5)
Sep 7 08:55:26.023: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5)
Sep 7 08:55:26.027: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5)
Sep 7 08:55:26.052: INFO: Unable to read jessie_udp@dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5)
Sep 7 08:55:26.055: INFO: Unable to read jessie_tcp@dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5)
Sep 7 08:55:26.058: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5)
Sep 7 08:55:26.060: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local from pod dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5: the server could not find the requested resource (get pods dns-test-5d5973e6-5094-4160-975c-3a81f75866b5)
Sep 7 08:55:26.077: INFO: Lookups using dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5 failed for: [wheezy_udp@dns-test-service.dns-9175.svc.cluster.local wheezy_tcp@dns-test-service.dns-9175.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local jessie_udp@dns-test-service.dns-9175.svc.cluster.local jessie_tcp@dns-test-service.dns-9175.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9175.svc.cluster.local]
Sep 7 08:55:31.139: INFO: DNS probes using dns-9175/dns-test-5d5973e6-5094-4160-975c-3a81f75866b5 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:55:31.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9175" for this suite.
• [SLOW TEST:39.377 seconds]
[sig-network] DNS
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for services [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":303,"completed":218,"skipped":3412,"failed":0}
S
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:55:31.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Sep 7 08:55:36.639: INFO: Successfully updated pod "labelsupdatefb8ea919-73d5-4be8-8f42-f94440b966e9"
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:55:40.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-328" for this suite.
• [SLOW TEST:8.708 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should update labels on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":219,"skipped":3413,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:55:40.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should release no longer matching pods [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Sep 7 08:55:40.775: INFO: Pod name pod-release: Found 0 pods out of 1
Sep 7 08:55:45.780: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:55:45.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6667" for this suite.
• [SLOW TEST:5.260 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should release no longer matching pods [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":303,"completed":220,"skipped":3461,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Services should test the lifecycle of an Endpoint [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:55:45.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should test the lifecycle of an Endpoint [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating an Endpoint
STEP: waiting for available Endpoint
STEP: listing all Endpoints
STEP: updating the Endpoint
STEP: fetching the Endpoint
STEP: patching the Endpoint
STEP: fetching the Endpoint
STEP: deleting the Endpoint by Collection
STEP: waiting for Endpoint deletion
STEP: fetching the Endpoint
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:55:46.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2628" for this suite.
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":303,"completed":221,"skipped":3474,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:55:46.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should support proxy with --port 0 [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting the proxy server
Sep 7 08:55:46.231: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:55:46.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1034" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":303,"completed":222,"skipped":3483,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:55:46.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should be updated [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Sep 7 08:55:51.425: INFO: Successfully updated pod "pod-update-ca4ebb26-e961-470e-bc7f-9c1b7d24a2b3"
STEP: verifying the updated pod is in kubernetes
Sep 7 08:55:51.703: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:55:51.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9904" for this suite.
• [SLOW TEST:5.390 seconds]
[k8s.io] Pods
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should be updated [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":303,"completed":223,"skipped":3490,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:55:51.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 7 08:55:52.289: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-4b0b5b58-fcef-4dc6-a13a-2e1020fb9a76" in namespace "security-context-test-2967" to be "Succeeded or Failed"
Sep 7 08:55:52.388: INFO: Pod "alpine-nnp-false-4b0b5b58-fcef-4dc6-a13a-2e1020fb9a76": Phase="Pending", Reason="", readiness=false. Elapsed: 99.477846ms
Sep 7 08:55:54.448: INFO: Pod "alpine-nnp-false-4b0b5b58-fcef-4dc6-a13a-2e1020fb9a76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159536456s
Sep 7 08:55:56.508: INFO: Pod "alpine-nnp-false-4b0b5b58-fcef-4dc6-a13a-2e1020fb9a76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.219448254s
Sep 7 08:55:56.508: INFO: Pod "alpine-nnp-false-4b0b5b58-fcef-4dc6-a13a-2e1020fb9a76" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:55:56.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2967" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":224,"skipped":3543,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:55:56.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 7 08:55:57.201: INFO: Waiting up to 5m0s for pod "downwardapi-volume-61be37b5-7faa-4905-ad27-3afa8c72da72" in namespace "projected-5831" to be "Succeeded or Failed"
Sep 7 08:55:57.920: INFO: Pod "downwardapi-volume-61be37b5-7faa-4905-ad27-3afa8c72da72": Phase="Pending", Reason="", readiness=false. Elapsed: 718.529932ms
Sep 7 08:55:59.923: INFO: Pod "downwardapi-volume-61be37b5-7faa-4905-ad27-3afa8c72da72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.722070223s
Sep 7 08:56:01.927: INFO: Pod "downwardapi-volume-61be37b5-7faa-4905-ad27-3afa8c72da72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.725418096s
STEP: Saw pod success
Sep 7 08:56:01.927: INFO: Pod "downwardapi-volume-61be37b5-7faa-4905-ad27-3afa8c72da72" satisfied condition "Succeeded or Failed"
Sep 7 08:56:01.929: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-61be37b5-7faa-4905-ad27-3afa8c72da72 container client-container:
STEP: delete the pod
Sep 7 08:56:01.982: INFO: Waiting for pod downwardapi-volume-61be37b5-7faa-4905-ad27-3afa8c72da72 to disappear
Sep 7 08:56:01.998: INFO: Pod downwardapi-volume-61be37b5-7faa-4905-ad27-3afa8c72da72 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:56:01.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5831" for this suite.
• [SLOW TEST:5.483 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
should provide container's memory request [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":225,"skipped":3547,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:56:02.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8895.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8895.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8895.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8895.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-8895.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8895.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 7 08:56:10.206: INFO: DNS probes using dns-8895/dns-test-003f26ee-8a75-4c3b-b4db-a58a27a5d704 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:56:10.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8895" for this suite.
• [SLOW TEST:8.989 seconds]
[sig-network] DNS
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":303,"completed":226,"skipped":3559,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:56:10.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 7 08:56:11.802: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 7 08:56:13.814: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065771, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065771, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065771, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065771, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 7 08:56:16.951: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:56:17.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8751" for this suite.
STEP: Destroying namespace "webhook-8751-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.417 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
patching/updating a mutating webhook should work [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":303,"completed":227,"skipped":3559,"failed":0}
S
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:56:17.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with configMap that has name projected-configmap-test-upd-b979d447-759a-456e-bbc0-b6006fc123db
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-b979d447-759a-456e-bbc0-b6006fc123db
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:56:25.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6678" for this suite.
• [SLOW TEST:8.217 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":228,"skipped":3560,"failed":0}
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:56:25.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Sep 7 08:56:25.787: INFO: Waiting up to 5m0s for pod "downward-api-2eb10447-ac56-4f35-aa36-795f9a889977" in namespace "downward-api-1144" to be "Succeeded or Failed"
Sep 7 08:56:25.790: INFO: Pod "downward-api-2eb10447-ac56-4f35-aa36-795f9a889977": Phase="Pending", Reason="", readiness=false. Elapsed: 2.667227ms
Sep 7 08:56:27.964: INFO: Pod "downward-api-2eb10447-ac56-4f35-aa36-795f9a889977": Phase="Pending", Reason="", readiness=false. Elapsed: 2.176479785s
Sep 7 08:56:29.969: INFO: Pod "downward-api-2eb10447-ac56-4f35-aa36-795f9a889977": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.181520084s
STEP: Saw pod success
Sep 7 08:56:29.969: INFO: Pod "downward-api-2eb10447-ac56-4f35-aa36-795f9a889977" satisfied condition "Succeeded or Failed"
Sep 7 08:56:29.971: INFO: Trying to get logs from node latest-worker2 pod downward-api-2eb10447-ac56-4f35-aa36-795f9a889977 container dapi-container:
STEP: delete the pod
Sep 7 08:56:30.073: INFO: Waiting for pod downward-api-2eb10447-ac56-4f35-aa36-795f9a889977 to disappear
Sep 7 08:56:30.075: INFO: Pod downward-api-2eb10447-ac56-4f35-aa36-795f9a889977 no longer exists
[AfterEach] [sig-node] Downward API
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:56:30.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1144" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":303,"completed":229,"skipped":3560,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:56:30.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-b0ca86be-392c-4f5a-8546-dfca61b040c0
STEP: Creating a pod to test consume configMaps
Sep 7 08:56:30.288: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8c0c1c46-6044-4ce3-b066-39356a70f746" in namespace "projected-8392" to be "Succeeded or Failed"
Sep 7 08:56:30.293: INFO: Pod "pod-projected-configmaps-8c0c1c46-6044-4ce3-b066-39356a70f746": Phase="Pending", Reason="", readiness=false. Elapsed: 4.516348ms
Sep 7 08:56:32.471: INFO: Pod "pod-projected-configmaps-8c0c1c46-6044-4ce3-b066-39356a70f746": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182384999s
Sep 7 08:56:34.474: INFO: Pod "pod-projected-configmaps-8c0c1c46-6044-4ce3-b066-39356a70f746": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.185878996s
STEP: Saw pod success
Sep 7 08:56:34.474: INFO: Pod "pod-projected-configmaps-8c0c1c46-6044-4ce3-b066-39356a70f746" satisfied condition "Succeeded or Failed"
Sep 7 08:56:34.477: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-8c0c1c46-6044-4ce3-b066-39356a70f746 container projected-configmap-volume-test:
STEP: delete the pod
Sep 7 08:56:34.513: INFO: Waiting for pod pod-projected-configmaps-8c0c1c46-6044-4ce3-b066-39356a70f746 to disappear
Sep 7 08:56:34.523: INFO: Pod pod-projected-configmaps-8c0c1c46-6044-4ce3-b066-39356a70f746 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:56:34.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8392" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":230,"skipped":3562,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:56:34.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should invoke init containers on a RestartAlways pod [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Sep 7 08:56:34.629: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:56:43.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7121" for this suite.
• [SLOW TEST:8.708 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":303,"completed":231,"skipped":3576,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:56:43.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-40a930f8-1b5e-4b18-997e-bb19b61258ab
STEP: Creating a pod to test consume configMaps
Sep 7 08:56:43.379: INFO: Waiting up to 5m0s for pod "pod-configmaps-d21b1251-1218-4782-8f92-c74be7ef0d29" in namespace "configmap-5478" to be "Succeeded or Failed"
Sep 7 08:56:43.425: INFO: Pod "pod-configmaps-d21b1251-1218-4782-8f92-c74be7ef0d29": Phase="Pending", Reason="", readiness=false. Elapsed: 45.674311ms
Sep 7 08:56:45.428: INFO: Pod "pod-configmaps-d21b1251-1218-4782-8f92-c74be7ef0d29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049271682s
Sep 7 08:56:47.433: INFO: Pod "pod-configmaps-d21b1251-1218-4782-8f92-c74be7ef0d29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053695692s
STEP: Saw pod success
Sep 7 08:56:47.433: INFO: Pod "pod-configmaps-d21b1251-1218-4782-8f92-c74be7ef0d29" satisfied condition "Succeeded or Failed"
Sep 7 08:56:47.436: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-d21b1251-1218-4782-8f92-c74be7ef0d29 container configmap-volume-test:
STEP: delete the pod
Sep 7 08:56:47.492: INFO: Waiting for pod pod-configmaps-d21b1251-1218-4782-8f92-c74be7ef0d29 to disappear
Sep 7 08:56:47.505: INFO: Pod pod-configmaps-d21b1251-1218-4782-8f92-c74be7ef0d29 no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:56:47.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5478" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":232,"skipped":3592,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:56:47.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 7 08:56:48.013: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 7 08:56:50.025: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065808, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065808, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065808, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065807, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 7 08:56:52.029: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065808, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065808, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065808, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065807, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 7 08:56:55.063: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:57:07.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9008" for this suite.
STEP: Destroying namespace "webhook-9008-markers" for this suite.
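The timeout/failurePolicy matrix the test walks through above (request rejected only when the webhook outlasts the timeout and the policy is Fail; a nil timeout defaults to 10s in admissionregistration/v1) can be sketched as a pure function. This is a hypothetical helper for illustration, not part of the e2e framework:

```python
def admission_outcome(timeout_s, webhook_latency_s, failure_policy="Fail"):
    """Sketch of the admission-webhook timeout semantics exercised above.

    If the webhook does not answer within the timeout, the request is
    rejected only when failurePolicy is "Fail"; with "Ignore" the
    request is admitted anyway. An unset timeout defaults to 10s in
    admissionregistration/v1, as the test output notes.
    """
    if timeout_s is None:
        timeout_s = 10  # v1 default
    if webhook_latency_s > timeout_s:
        return "denied" if failure_policy == "Fail" else "allowed"
    return "allowed"
```

With a 5s-slow webhook this reproduces the four cases in the log: a 1s timeout with policy Fail is denied, 1s with Ignore is allowed, a 30s timeout is allowed, and an empty timeout (10s default) is allowed.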
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:19.926 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":303,"completed":233,"skipped":3594,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:57:07.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-10e82fec-81f4-42e6-90c9-51f496514f0b
STEP: Creating a pod to test consume configMaps
Sep 7 08:57:07.524: INFO: Waiting up to 5m0s for pod "pod-configmaps-bfde867a-4486-43be-8b48-a0640994288e" in namespace "configmap-1580" to be "Succeeded or Failed"
Sep 7 08:57:07.550: INFO: Pod "pod-configmaps-bfde867a-4486-43be-8b48-a0640994288e": Phase="Pending", Reason="", readiness=false. Elapsed: 26.409253ms
Sep 7 08:57:09.554: INFO: Pod "pod-configmaps-bfde867a-4486-43be-8b48-a0640994288e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030038761s
Sep 7 08:57:11.558: INFO: Pod "pod-configmaps-bfde867a-4486-43be-8b48-a0640994288e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034111516s
STEP: Saw pod success
Sep 7 08:57:11.558: INFO: Pod "pod-configmaps-bfde867a-4486-43be-8b48-a0640994288e" satisfied condition "Succeeded or Failed"
Sep 7 08:57:11.561: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-bfde867a-4486-43be-8b48-a0640994288e container configmap-volume-test:
STEP: delete the pod
Sep 7 08:57:11.636: INFO: Waiting for pod pod-configmaps-bfde867a-4486-43be-8b48-a0640994288e to disappear
Sep 7 08:57:11.644: INFO: Pod pod-configmaps-bfde867a-4486-43be-8b48-a0640994288e no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:57:11.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1580" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":234,"skipped":3610,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:57:11.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run through a ConfigMap lifecycle [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a ConfigMap
STEP: fetching the ConfigMap
STEP: patching the ConfigMap
STEP: listing all ConfigMaps in all namespaces with a label selector
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:57:11.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1115" for this suite.
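Nearly every test above follows the same pattern: poll the pod's phase every couple of seconds until it reaches "Succeeded" or "Failed", giving up after 5m0s. A minimal sketch of that wait loop, with hypothetical names (the real e2e framework implements this in Go as `WaitForPodCondition` and friends):

```python
import time


def wait_for_pod_phase(get_phase, timeout_s=300, poll_s=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod is terminal or the timeout expires.

    Mirrors the 'Waiting up to 5m0s for pod ... to be "Succeeded or
    Failed"' loop seen throughout this log. clock and sleep are
    injectable so the loop can be tested without real waiting.
    """
    start = clock()
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase  # terminal phase reached
        if clock() - start >= timeout_s:
            raise TimeoutError(f"pod still {phase} after {timeout_s}s")
        sleep(poll_s)
```

The injected `clock`/`sleep` parameters are a testing convenience, not something the Go framework exposes; the essential behavior is the terminal-phase check and the overall deadline.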
•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":303,"completed":235,"skipped":3632,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:57:11.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 7 08:57:11.857: INFO: Waiting up to 5m0s for pod "downwardapi-volume-690f652d-185a-4722-8d6c-b503507edd57" in namespace "downward-api-4137" to be "Succeeded or Failed"
Sep 7 08:57:11.860: INFO: Pod "downwardapi-volume-690f652d-185a-4722-8d6c-b503507edd57": Phase="Pending", Reason="", readiness=false. Elapsed: 3.233679ms
Sep 7 08:57:13.865: INFO: Pod "downwardapi-volume-690f652d-185a-4722-8d6c-b503507edd57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008064496s
Sep 7 08:57:15.898: INFO: Pod "downwardapi-volume-690f652d-185a-4722-8d6c-b503507edd57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041682181s
STEP: Saw pod success
Sep 7 08:57:15.898: INFO: Pod "downwardapi-volume-690f652d-185a-4722-8d6c-b503507edd57" satisfied condition "Succeeded or Failed"
Sep 7 08:57:15.901: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-690f652d-185a-4722-8d6c-b503507edd57 container client-container:
STEP: delete the pod
Sep 7 08:57:16.229: INFO: Waiting for pod downwardapi-volume-690f652d-185a-4722-8d6c-b503507edd57 to disappear
Sep 7 08:57:16.252: INFO: Pod downwardapi-volume-690f652d-185a-4722-8d6c-b503507edd57 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:57:16.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4137" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":236,"skipped":3658,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:57:16.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0907 08:57:17.692912 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Sep 7 08:58:19.765: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:58:19.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7166" for this suite.
• [SLOW TEST:63.504 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":303,"completed":237,"skipped":3684,"failed":0}
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:58:19.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-downwardapi-hpgx
STEP: Creating a pod to test atomic-volume-subpath
Sep 7 08:58:19.935: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-hpgx" in namespace "subpath-1531" to be "Succeeded or Failed"
Sep 7 08:58:19.969: INFO: Pod "pod-subpath-test-downwardapi-hpgx": Phase="Pending", Reason="", readiness=false. Elapsed: 33.841027ms
Sep 7 08:58:21.973: INFO: Pod "pod-subpath-test-downwardapi-hpgx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038091796s
Sep 7 08:58:23.979: INFO: Pod "pod-subpath-test-downwardapi-hpgx": Phase="Running", Reason="", readiness=true. Elapsed: 4.043912841s
Sep 7 08:58:25.985: INFO: Pod "pod-subpath-test-downwardapi-hpgx": Phase="Running", Reason="", readiness=true. Elapsed: 6.049491519s
Sep 7 08:58:27.990: INFO: Pod "pod-subpath-test-downwardapi-hpgx": Phase="Running", Reason="", readiness=true. Elapsed: 8.054779729s
Sep 7 08:58:29.994: INFO: Pod "pod-subpath-test-downwardapi-hpgx": Phase="Running", Reason="", readiness=true. Elapsed: 10.058619263s
Sep 7 08:58:31.998: INFO: Pod "pod-subpath-test-downwardapi-hpgx": Phase="Running", Reason="", readiness=true. Elapsed: 12.062923093s
Sep 7 08:58:34.003: INFO: Pod "pod-subpath-test-downwardapi-hpgx": Phase="Running", Reason="", readiness=true. Elapsed: 14.067988656s
Sep 7 08:58:36.009: INFO: Pod "pod-subpath-test-downwardapi-hpgx": Phase="Running", Reason="", readiness=true. Elapsed: 16.073731118s
Sep 7 08:58:38.015: INFO: Pod "pod-subpath-test-downwardapi-hpgx": Phase="Running", Reason="", readiness=true. Elapsed: 18.07945333s
Sep 7 08:58:40.018: INFO: Pod "pod-subpath-test-downwardapi-hpgx": Phase="Running", Reason="", readiness=true. Elapsed: 20.082675199s
Sep 7 08:58:42.022: INFO: Pod "pod-subpath-test-downwardapi-hpgx": Phase="Running", Reason="", readiness=true. Elapsed: 22.086899743s
Sep 7 08:58:44.027: INFO: Pod "pod-subpath-test-downwardapi-hpgx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.092068258s
STEP: Saw pod success
Sep 7 08:58:44.027: INFO: Pod "pod-subpath-test-downwardapi-hpgx" satisfied condition "Succeeded or Failed"
Sep 7 08:58:44.031: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-hpgx container test-container-subpath-downwardapi-hpgx:
STEP: delete the pod
Sep 7 08:58:44.082: INFO: Waiting for pod pod-subpath-test-downwardapi-hpgx to disappear
Sep 7 08:58:44.095: INFO: Pod pod-subpath-test-downwardapi-hpgx no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-hpgx
Sep 7 08:58:44.095: INFO: Deleting pod "pod-subpath-test-downwardapi-hpgx" in namespace "subpath-1531"
[AfterEach] [sig-storage] Subpath
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:58:44.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1531" for this suite.
• [SLOW TEST:24.330 seconds]
[sig-storage] Subpath
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":303,"completed":238,"skipped":3690,"failed":0}
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 08:58:44.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Sep 7 08:58:44.188: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Sep 7 08:58:44.208: INFO: Waiting for terminating namespaces to be deleted...
Sep 7 08:58:44.211: INFO: Logging pods the apiserver thinks is on node latest-worker before test
Sep 7 08:58:44.216: INFO: kindnet-d72xf from kube-system started at 2020-09-06 13:49:16 +0000 UTC (1 container statuses recorded)
Sep 7 08:58:44.216: INFO: Container kindnet-cni ready: true, restart count 0
Sep 7 08:58:44.216: INFO: kube-proxy-64mm6 from kube-system started at 2020-09-06 13:49:14 +0000 UTC (1 container statuses recorded)
Sep 7 08:58:44.216: INFO: Container kube-proxy ready: true, restart count 0
Sep 7 08:58:44.216: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test
Sep 7 08:58:44.221: INFO: kindnet-dktmm from kube-system started at 2020-09-06 13:49:16 +0000 UTC (1 container statuses recorded)
Sep 7 08:58:44.221: INFO: Container kindnet-cni ready: true, restart count 0
Sep 7 08:58:44.221: INFO: kube-proxy-b55gf from kube-system started at 2020-09-06 13:49:14 +0000 UTC (1 container statuses recorded)
Sep 7 08:58:44.221: INFO: Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: verifying the node has the label node latest-worker
STEP: verifying the node has the label node latest-worker2
Sep 7 08:58:44.314: INFO: Pod kindnet-d72xf requesting resource cpu=100m on Node latest-worker
Sep 7 08:58:44.314: INFO: Pod kindnet-dktmm requesting resource cpu=100m on Node latest-worker2
Sep 7 08:58:44.314: INFO: Pod kube-proxy-64mm6 requesting resource cpu=0m on Node latest-worker
Sep 7 08:58:44.314: INFO: Pod kube-proxy-b55gf requesting resource cpu=0m on Node latest-worker2
STEP: Starting Pods to consume most of the cluster CPU.
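The test sizes its "filler" pods from the numbers it just logged: on each node it requests roughly the allocatable CPU minus what existing pods already request, so that one more pod with any meaningful request cannot be scheduled. A rough sketch of that arithmetic (hypothetical helpers; the real test derives the values from node status in Go):

```python
def filler_cpu_millis(allocatable_m, requested_m):
    """CPU request (in millicores) for a filler pod that saturates a node.

    E.g. with the 100m kindnet request logged above, a filler request of
    11130m implies roughly 11230m allocatable on the node (illustrative
    numbers only).
    """
    return allocatable_m - requested_m


def fits(allocatable_m, requested_m, pod_request_m):
    """Would a new pod with pod_request_m still fit on the node?"""
    return requested_m + pod_request_m <= allocatable_m
```

Once the fillers are running, `fits` is false for any positive request, which is exactly the `FailedScheduling ... 2 Insufficient cpu` event the test then waits for.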
Sep 7 08:58:44.314: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker
Sep 7 08:58:44.320: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-db435bb8-f9b1-4492-a774-87fadc57682b.163274b1b91f0f8c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6707/filler-pod-db435bb8-f9b1-4492-a774-87fadc57682b to latest-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-838d3034-2227-4be3-8684-fe45d443a190.163274b1b995f8e0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6707/filler-pod-838d3034-2227-4be3-8684-fe45d443a190 to latest-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-db435bb8-f9b1-4492-a774-87fadc57682b.163274b28a437672], Reason = [Created], Message = [Created container filler-pod-db435bb8-f9b1-4492-a774-87fadc57682b]
STEP: Considering event: Type = [Normal], Name = [filler-pod-838d3034-2227-4be3-8684-fe45d443a190.163274b2787a76ac], Reason = [Started], Message = [Started container filler-pod-838d3034-2227-4be3-8684-fe45d443a190]
STEP: Considering event: Type = [Normal], Name = [filler-pod-838d3034-2227-4be3-8684-fe45d443a190.163274b20641af9c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-db435bb8-f9b1-4492-a774-87fadc57682b.163274b2482c04b4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-838d3034-2227-4be3-8684-fe45d443a190.163274b257150dc8], Reason = [Created], Message = [Created container filler-pod-838d3034-2227-4be3-8684-fe45d443a190]
STEP: Considering event: Type = [Normal], Name = [filler-pod-db435bb8-f9b1-4492-a774-87fadc57682b.163274b29a169569], Reason = [Started], Message = [Started container filler-pod-db435bb8-f9b1-4492-a774-87fadc57682b]
STEP: Considering event: Type = [Warning], Name = [additional-pod.163274b3206a5490], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node latest-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node latest-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 08:58:51.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6707" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:7.341 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run [Conformance]
  /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":303,"completed":239,"skipped":3690,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:58:51.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 7 08:58:51.549: INFO: Waiting up to 5m0s for pod "downwardapi-volume-453e2fa3-4977-4071-8647-7286748ecdda" in namespace "projected-2145" to be "Succeeded or Failed" Sep 7 08:58:51.565: INFO: Pod "downwardapi-volume-453e2fa3-4977-4071-8647-7286748ecdda": Phase="Pending", Reason="", readiness=false. Elapsed: 15.901737ms Sep 7 08:58:53.570: INFO: Pod "downwardapi-volume-453e2fa3-4977-4071-8647-7286748ecdda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020281394s Sep 7 08:58:55.574: INFO: Pod "downwardapi-volume-453e2fa3-4977-4071-8647-7286748ecdda": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024965727s STEP: Saw pod success Sep 7 08:58:55.574: INFO: Pod "downwardapi-volume-453e2fa3-4977-4071-8647-7286748ecdda" satisfied condition "Succeeded or Failed" Sep 7 08:58:55.578: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-453e2fa3-4977-4071-8647-7286748ecdda container client-container: STEP: delete the pod Sep 7 08:58:55.597: INFO: Waiting for pod downwardapi-volume-453e2fa3-4977-4071-8647-7286748ecdda to disappear Sep 7 08:58:55.601: INFO: Pod downwardapi-volume-453e2fa3-4977-4071-8647-7286748ecdda no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:58:55.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2145" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":240,"skipped":3696,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:58:55.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 08:58:55.731: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-d8479d5a-9314-4a54-a41d-a23e96228541" in namespace "security-context-test-1523" to be "Succeeded or Failed" Sep 7 08:58:55.749: INFO: Pod "busybox-privileged-false-d8479d5a-9314-4a54-a41d-a23e96228541": Phase="Pending", Reason="", readiness=false. Elapsed: 18.043474ms Sep 7 08:58:58.200: INFO: Pod "busybox-privileged-false-d8479d5a-9314-4a54-a41d-a23e96228541": Phase="Pending", Reason="", readiness=false. Elapsed: 2.46916439s Sep 7 08:59:00.204: INFO: Pod "busybox-privileged-false-d8479d5a-9314-4a54-a41d-a23e96228541": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.473829585s Sep 7 08:59:00.204: INFO: Pod "busybox-privileged-false-d8479d5a-9314-4a54-a41d-a23e96228541" satisfied condition "Succeeded or Failed" Sep 7 08:59:00.227: INFO: Got logs for pod "busybox-privileged-false-d8479d5a-9314-4a54-a41d-a23e96228541": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:59:00.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1523" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":241,"skipped":3713,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:59:00.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-f4829734-fba0-49b1-9570-a4ab849d0922 STEP: Creating a pod to test consume secrets Sep 7 08:59:00.362: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7cde0f14-cd48-48ca-8774-116ea46b0c61" in namespace "projected-65" to be "Succeeded or Failed" Sep 7 08:59:00.366: INFO: Pod "pod-projected-secrets-7cde0f14-cd48-48ca-8774-116ea46b0c61": Phase="Pending", Reason="", readiness=false. Elapsed: 3.984276ms Sep 7 08:59:02.369: INFO: Pod "pod-projected-secrets-7cde0f14-cd48-48ca-8774-116ea46b0c61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007678122s Sep 7 08:59:04.390: INFO: Pod "pod-projected-secrets-7cde0f14-cd48-48ca-8774-116ea46b0c61": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028542647s STEP: Saw pod success Sep 7 08:59:04.390: INFO: Pod "pod-projected-secrets-7cde0f14-cd48-48ca-8774-116ea46b0c61" satisfied condition "Succeeded or Failed" Sep 7 08:59:04.393: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-7cde0f14-cd48-48ca-8774-116ea46b0c61 container projected-secret-volume-test: STEP: delete the pod Sep 7 08:59:04.433: INFO: Waiting for pod pod-projected-secrets-7cde0f14-cd48-48ca-8774-116ea46b0c61 to disappear Sep 7 08:59:04.439: INFO: Pod pod-projected-secrets-7cde0f14-cd48-48ca-8774-116ea46b0c61 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:59:04.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-65" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":242,"skipped":3727,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:59:04.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command Sep 7 08:59:04.524: INFO: Waiting up to 5m0s for pod "client-containers-61c427e0-3f01-4c56-9b9c-c1efa4cf3f00" in namespace "containers-5990" to be "Succeeded or Failed" Sep 7 08:59:04.529: INFO: Pod "client-containers-61c427e0-3f01-4c56-9b9c-c1efa4cf3f00": Phase="Pending", Reason="", readiness=false. Elapsed: 5.087058ms Sep 7 08:59:06.582: INFO: Pod "client-containers-61c427e0-3f01-4c56-9b9c-c1efa4cf3f00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057873613s Sep 7 08:59:08.586: INFO: Pod "client-containers-61c427e0-3f01-4c56-9b9c-c1efa4cf3f00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061967156s STEP: Saw pod success Sep 7 08:59:08.586: INFO: Pod "client-containers-61c427e0-3f01-4c56-9b9c-c1efa4cf3f00" satisfied condition "Succeeded or Failed" Sep 7 08:59:08.589: INFO: Trying to get logs from node latest-worker pod client-containers-61c427e0-3f01-4c56-9b9c-c1efa4cf3f00 container test-container: STEP: delete the pod Sep 7 08:59:08.618: INFO: Waiting for pod client-containers-61c427e0-3f01-4c56-9b9c-c1efa4cf3f00 to disappear Sep 7 08:59:08.637: INFO: Pod client-containers-61c427e0-3f01-4c56-9b9c-c1efa4cf3f00 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:59:08.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5990" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":303,"completed":243,"skipped":3758,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:59:08.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Sep 7 08:59:08.915: INFO: Waiting up to 5m0s for pod "pod-00391f6f-f230-4c83-97de-0634f187bc49" in namespace "emptydir-5421" to be "Succeeded or Failed" Sep 7 08:59:08.919: INFO: Pod "pod-00391f6f-f230-4c83-97de-0634f187bc49": Phase="Pending", Reason="", readiness=false. Elapsed: 3.513199ms Sep 7 08:59:10.924: INFO: Pod "pod-00391f6f-f230-4c83-97de-0634f187bc49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008992909s Sep 7 08:59:12.947: INFO: Pod "pod-00391f6f-f230-4c83-97de-0634f187bc49": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.032366076s STEP: Saw pod success Sep 7 08:59:12.947: INFO: Pod "pod-00391f6f-f230-4c83-97de-0634f187bc49" satisfied condition "Succeeded or Failed" Sep 7 08:59:12.950: INFO: Trying to get logs from node latest-worker2 pod pod-00391f6f-f230-4c83-97de-0634f187bc49 container test-container: STEP: delete the pod Sep 7 08:59:12.990: INFO: Waiting for pod pod-00391f6f-f230-4c83-97de-0634f187bc49 to disappear Sep 7 08:59:13.007: INFO: Pod pod-00391f6f-f230-4c83-97de-0634f187bc49 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:59:13.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5421" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":244,"skipped":3766,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:59:13.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring 
active pods == parallelism STEP: Orphaning one of the Job's Pods Sep 7 08:59:19.819: INFO: Successfully updated pod "adopt-release-8lf8k" STEP: Checking that the Job readopts the Pod Sep 7 08:59:19.819: INFO: Waiting up to 15m0s for pod "adopt-release-8lf8k" in namespace "job-9847" to be "adopted" Sep 7 08:59:19.878: INFO: Pod "adopt-release-8lf8k": Phase="Running", Reason="", readiness=true. Elapsed: 59.57626ms Sep 7 08:59:21.882: INFO: Pod "adopt-release-8lf8k": Phase="Running", Reason="", readiness=true. Elapsed: 2.063219659s Sep 7 08:59:21.882: INFO: Pod "adopt-release-8lf8k" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Sep 7 08:59:22.394: INFO: Successfully updated pod "adopt-release-8lf8k" STEP: Checking that the Job releases the Pod Sep 7 08:59:22.394: INFO: Waiting up to 15m0s for pod "adopt-release-8lf8k" in namespace "job-9847" to be "released" Sep 7 08:59:22.405: INFO: Pod "adopt-release-8lf8k": Phase="Running", Reason="", readiness=true. Elapsed: 10.953063ms Sep 7 08:59:24.408: INFO: Pod "adopt-release-8lf8k": Phase="Running", Reason="", readiness=true. Elapsed: 2.014221118s Sep 7 08:59:24.408: INFO: Pod "adopt-release-8lf8k" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:59:24.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9847" for this suite. 
• [SLOW TEST:11.400 seconds] [sig-apps] Job /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":303,"completed":245,"skipped":3801,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:59:24.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 08:59:28.723: INFO: Waiting up to 5m0s for pod "client-envvars-46324272-beee-43c6-a799-95d44bc91032" in namespace "pods-555" to be "Succeeded or Failed" Sep 7 08:59:28.746: INFO: Pod "client-envvars-46324272-beee-43c6-a799-95d44bc91032": Phase="Pending", Reason="", 
readiness=false. Elapsed: 22.892979ms Sep 7 08:59:30.751: INFO: Pod "client-envvars-46324272-beee-43c6-a799-95d44bc91032": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027704417s Sep 7 08:59:32.754: INFO: Pod "client-envvars-46324272-beee-43c6-a799-95d44bc91032": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031151173s STEP: Saw pod success Sep 7 08:59:32.754: INFO: Pod "client-envvars-46324272-beee-43c6-a799-95d44bc91032" satisfied condition "Succeeded or Failed" Sep 7 08:59:32.757: INFO: Trying to get logs from node latest-worker pod client-envvars-46324272-beee-43c6-a799-95d44bc91032 container env3cont: STEP: delete the pod Sep 7 08:59:32.921: INFO: Waiting for pod client-envvars-46324272-beee-43c6-a799-95d44bc91032 to disappear Sep 7 08:59:32.930: INFO: Pod client-envvars-46324272-beee-43c6-a799-95d44bc91032 no longer exists [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:59:32.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-555" for this suite. 
• [SLOW TEST:8.520 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":303,"completed":246,"skipped":3822,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:59:32.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Sep 7 08:59:33.105: INFO: Waiting up to 5m0s for pod "pod-d846af23-86ea-4c9f-9c6f-29754bf45de9" in namespace "emptydir-1349" to be "Succeeded or Failed" Sep 7 08:59:33.116: INFO: Pod "pod-d846af23-86ea-4c9f-9c6f-29754bf45de9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.48562ms Sep 7 08:59:35.120: INFO: Pod "pod-d846af23-86ea-4c9f-9c6f-29754bf45de9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014833962s Sep 7 08:59:37.126: INFO: Pod "pod-d846af23-86ea-4c9f-9c6f-29754bf45de9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020720112s STEP: Saw pod success Sep 7 08:59:37.126: INFO: Pod "pod-d846af23-86ea-4c9f-9c6f-29754bf45de9" satisfied condition "Succeeded or Failed" Sep 7 08:59:37.129: INFO: Trying to get logs from node latest-worker pod pod-d846af23-86ea-4c9f-9c6f-29754bf45de9 container test-container: STEP: delete the pod Sep 7 08:59:37.176: INFO: Waiting for pod pod-d846af23-86ea-4c9f-9c6f-29754bf45de9 to disappear Sep 7 08:59:37.187: INFO: Pod pod-d846af23-86ea-4c9f-9c6f-29754bf45de9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:59:37.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1349" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":247,"skipped":3831,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:59:37.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:59:48.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4922" for this suite. • [SLOW TEST:11.165 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":303,"completed":248,"skipped":3855,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:59:48.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 7 08:59:48.834: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 7 08:59:50.889: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065988, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065988, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065988, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735065988, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 7 08:59:53.954: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:59:53.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7285" for this suite. STEP: Destroying namespace "webhook-7285-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.716 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":303,"completed":249,"skipped":3870,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:59:54.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 7 08:59:54.149: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2261700e-32f2-46df-a871-78e0d152d7d6" in namespace "downward-api-4272" to be "Succeeded or Failed" Sep 7 08:59:54.152: INFO: Pod "downwardapi-volume-2261700e-32f2-46df-a871-78e0d152d7d6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.892863ms Sep 7 08:59:56.157: INFO: Pod "downwardapi-volume-2261700e-32f2-46df-a871-78e0d152d7d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008584938s Sep 7 08:59:58.191: INFO: Pod "downwardapi-volume-2261700e-32f2-46df-a871-78e0d152d7d6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.041989867s STEP: Saw pod success Sep 7 08:59:58.191: INFO: Pod "downwardapi-volume-2261700e-32f2-46df-a871-78e0d152d7d6" satisfied condition "Succeeded or Failed" Sep 7 08:59:58.193: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-2261700e-32f2-46df-a871-78e0d152d7d6 container client-container: STEP: delete the pod Sep 7 08:59:58.345: INFO: Waiting for pod downwardapi-volume-2261700e-32f2-46df-a871-78e0d152d7d6 to disappear Sep 7 08:59:58.352: INFO: Pod downwardapi-volume-2261700e-32f2-46df-a871-78e0d152d7d6 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 08:59:58.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4272" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":250,"skipped":3889,"failed":0} ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 08:59:58.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created 
API token STEP: reading a file in the container Sep 7 09:00:05.230: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9640 pod-service-account-b382fd2d-0b06-4975-95f1-4350c13f5ea3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Sep 7 09:00:10.377: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9640 pod-service-account-b382fd2d-0b06-4975-95f1-4350c13f5ea3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Sep 7 09:00:10.590: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9640 pod-service-account-b382fd2d-0b06-4975-95f1-4350c13f5ea3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:00:10.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9640" for this suite. 
• [SLOW TEST:12.463 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":303,"completed":251,"skipped":3889,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:00:10.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 7 09:00:11.828: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 7 09:00:14.021: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735066011, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735066011, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735066011, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735066011, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 7 09:00:17.084: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:00:17.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-183" for this suite. STEP: Destroying namespace "webhook-183-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.554 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":303,"completed":252,"skipped":3923,"failed":0} [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:00:17.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:00:21.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-184" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":303,"completed":253,"skipped":3923,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:00:21.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-e43f7969-650b-4244-8e7c-38100244bc2b in namespace container-probe-2249 Sep 7 09:00:25.669: INFO: Started pod test-webserver-e43f7969-650b-4244-8e7c-38100244bc2b in namespace container-probe-2249 STEP: checking the pod's current state and verifying that restartCount is present Sep 7 09:00:25.672: INFO: Initial restart count of pod 
test-webserver-e43f7969-650b-4244-8e7c-38100244bc2b is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:04:26.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2249" for this suite. • [SLOW TEST:244.929 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":254,"skipped":3935,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:04:26.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 7 09:04:26.818: INFO: Waiting up to 5m0s for pod "downwardapi-volume-68705a62-8a7b-46d7-ae6e-79ca80dc815a" in namespace "projected-2920" to be "Succeeded or Failed" Sep 7 09:04:26.976: INFO: Pod "downwardapi-volume-68705a62-8a7b-46d7-ae6e-79ca80dc815a": Phase="Pending", Reason="", readiness=false. Elapsed: 156.959742ms Sep 7 09:04:28.983: INFO: Pod "downwardapi-volume-68705a62-8a7b-46d7-ae6e-79ca80dc815a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164528597s Sep 7 09:04:30.987: INFO: Pod "downwardapi-volume-68705a62-8a7b-46d7-ae6e-79ca80dc815a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.168933259s STEP: Saw pod success Sep 7 09:04:30.988: INFO: Pod "downwardapi-volume-68705a62-8a7b-46d7-ae6e-79ca80dc815a" satisfied condition "Succeeded or Failed" Sep 7 09:04:30.991: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-68705a62-8a7b-46d7-ae6e-79ca80dc815a container client-container: STEP: delete the pod Sep 7 09:04:31.091: INFO: Waiting for pod downwardapi-volume-68705a62-8a7b-46d7-ae6e-79ca80dc815a to disappear Sep 7 09:04:31.098: INFO: Pod downwardapi-volume-68705a62-8a7b-46d7-ae6e-79ca80dc815a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:04:31.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2920" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":255,"skipped":3966,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Ingress API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:04:31.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Sep 7 09:04:31.316: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Sep 7 09:04:31.320: INFO: starting watch STEP: patching STEP: updating Sep 7 09:04:31.346: INFO: waiting for watch events with expected annotations Sep 7 09:04:31.346: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:04:31.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-9249" for this suite. 
•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":303,"completed":256,"skipped":3980,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:04:31.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-4154 STEP: creating service affinity-clusterip in namespace services-4154 STEP: creating replication controller affinity-clusterip in namespace services-4154 I0907 09:04:31.676384 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-4154, replica count: 3 I0907 09:04:34.726810 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 09:04:37.727101 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady Sep 7 09:04:37.733: INFO: Creating new exec pod Sep 7 09:04:42.808: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-4154 execpod-affinityzcm2k -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Sep 7 09:04:43.039: INFO: stderr: "I0907 09:04:42.952857 2956 log.go:181] (0xc0007bcf20) (0xc0007b4640) Create stream\nI0907 09:04:42.952912 2956 log.go:181] (0xc0007bcf20) (0xc0007b4640) Stream added, broadcasting: 1\nI0907 09:04:42.957971 2956 log.go:181] (0xc0007bcf20) Reply frame received for 1\nI0907 09:04:42.958021 2956 log.go:181] (0xc0007bcf20) (0xc000459400) Create stream\nI0907 09:04:42.958035 2956 log.go:181] (0xc0007bcf20) (0xc000459400) Stream added, broadcasting: 3\nI0907 09:04:42.959075 2956 log.go:181] (0xc0007bcf20) Reply frame received for 3\nI0907 09:04:42.959113 2956 log.go:181] (0xc0007bcf20) (0xc0007b4000) Create stream\nI0907 09:04:42.959125 2956 log.go:181] (0xc0007bcf20) (0xc0007b4000) Stream added, broadcasting: 5\nI0907 09:04:42.960354 2956 log.go:181] (0xc0007bcf20) Reply frame received for 5\nI0907 09:04:43.032300 2956 log.go:181] (0xc0007bcf20) Data frame received for 3\nI0907 09:04:43.032328 2956 log.go:181] (0xc000459400) (3) Data frame handling\nI0907 09:04:43.032655 2956 log.go:181] (0xc0007bcf20) Data frame received for 5\nI0907 09:04:43.032684 2956 log.go:181] (0xc0007b4000) (5) Data frame handling\nI0907 09:04:43.032703 2956 log.go:181] (0xc0007b4000) (5) Data frame sent\nI0907 09:04:43.032723 2956 log.go:181] (0xc0007bcf20) Data frame received for 5\nI0907 09:04:43.032734 2956 log.go:181] (0xc0007b4000) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0907 09:04:43.034502 2956 log.go:181] (0xc0007bcf20) Data frame received for 1\nI0907 09:04:43.034522 2956 log.go:181] (0xc0007b4640) (1) Data frame handling\nI0907 09:04:43.034540 2956 log.go:181] 
(0xc0007b4640) (1) Data frame sent\nI0907 09:04:43.034673 2956 log.go:181] (0xc0007bcf20) (0xc0007b4640) Stream removed, broadcasting: 1\nI0907 09:04:43.034706 2956 log.go:181] (0xc0007bcf20) Go away received\nI0907 09:04:43.035270 2956 log.go:181] (0xc0007bcf20) (0xc0007b4640) Stream removed, broadcasting: 1\nI0907 09:04:43.035308 2956 log.go:181] (0xc0007bcf20) (0xc000459400) Stream removed, broadcasting: 3\nI0907 09:04:43.035328 2956 log.go:181] (0xc0007bcf20) (0xc0007b4000) Stream removed, broadcasting: 5\n" Sep 7 09:04:43.039: INFO: stdout: "" Sep 7 09:04:43.040: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-4154 execpod-affinityzcm2k -- /bin/sh -x -c nc -zv -t -w 2 10.100.209.104 80' Sep 7 09:04:43.264: INFO: stderr: "I0907 09:04:43.178450 2975 log.go:181] (0xc000143600) (0xc0005528c0) Create stream\nI0907 09:04:43.178500 2975 log.go:181] (0xc000143600) (0xc0005528c0) Stream added, broadcasting: 1\nI0907 09:04:43.184308 2975 log.go:181] (0xc000143600) Reply frame received for 1\nI0907 09:04:43.184344 2975 log.go:181] (0xc000143600) (0xc000bec0a0) Create stream\nI0907 09:04:43.184354 2975 log.go:181] (0xc000143600) (0xc000bec0a0) Stream added, broadcasting: 3\nI0907 09:04:43.185362 2975 log.go:181] (0xc000143600) Reply frame received for 3\nI0907 09:04:43.185390 2975 log.go:181] (0xc000143600) (0xc000552000) Create stream\nI0907 09:04:43.185398 2975 log.go:181] (0xc000143600) (0xc000552000) Stream added, broadcasting: 5\nI0907 09:04:43.186399 2975 log.go:181] (0xc000143600) Reply frame received for 5\nI0907 09:04:43.255527 2975 log.go:181] (0xc000143600) Data frame received for 5\nI0907 09:04:43.255566 2975 log.go:181] (0xc000552000) (5) Data frame handling\nI0907 09:04:43.255600 2975 log.go:181] (0xc000552000) (5) Data frame sent\nI0907 09:04:43.255619 2975 log.go:181] (0xc000143600) Data frame received for 5\nI0907 09:04:43.255639 2975 log.go:181] (0xc000552000) (5) Data 
frame handling\n+ nc -zv -t -w 2 10.100.209.104 80\nConnection to 10.100.209.104 80 port [tcp/http] succeeded!\nI0907 09:04:43.255882 2975 log.go:181] (0xc000143600) Data frame received for 3\nI0907 09:04:43.255915 2975 log.go:181] (0xc000bec0a0) (3) Data frame handling\nI0907 09:04:43.259248 2975 log.go:181] (0xc000143600) Data frame received for 1\nI0907 09:04:43.259279 2975 log.go:181] (0xc0005528c0) (1) Data frame handling\nI0907 09:04:43.259301 2975 log.go:181] (0xc0005528c0) (1) Data frame sent\nI0907 09:04:43.259393 2975 log.go:181] (0xc000143600) (0xc0005528c0) Stream removed, broadcasting: 1\nI0907 09:04:43.259421 2975 log.go:181] (0xc000143600) Go away received\nI0907 09:04:43.259834 2975 log.go:181] (0xc000143600) (0xc0005528c0) Stream removed, broadcasting: 1\nI0907 09:04:43.259874 2975 log.go:181] (0xc000143600) (0xc000bec0a0) Stream removed, broadcasting: 3\nI0907 09:04:43.259903 2975 log.go:181] (0xc000143600) (0xc000552000) Stream removed, broadcasting: 5\n" Sep 7 09:04:43.264: INFO: stdout: "" Sep 7 09:04:43.264: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-4154 execpod-affinityzcm2k -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.100.209.104:80/ ; done' Sep 7 09:04:43.571: INFO: stderr: "I0907 09:04:43.399830 2993 log.go:181] (0xc000eb8f20) (0xc0000e6820) Create stream\nI0907 09:04:43.399884 2993 log.go:181] (0xc000eb8f20) (0xc0000e6820) Stream added, broadcasting: 1\nI0907 09:04:43.404967 2993 log.go:181] (0xc000eb8f20) Reply frame received for 1\nI0907 09:04:43.405011 2993 log.go:181] (0xc000eb8f20) (0xc000c40000) Create stream\nI0907 09:04:43.405024 2993 log.go:181] (0xc000eb8f20) (0xc000c40000) Stream added, broadcasting: 3\nI0907 09:04:43.405968 2993 log.go:181] (0xc000eb8f20) Reply frame received for 3\nI0907 09:04:43.406008 2993 log.go:181] (0xc000eb8f20) (0xc000c400a0) Create stream\nI0907 09:04:43.406021 2993 
log.go:181] (0xc000eb8f20) (0xc000c400a0) Stream added, broadcasting: 5\nI0907 09:04:43.406891 2993 log.go:181] (0xc000eb8f20) Reply frame received for 5\nI0907 09:04:43.466885 2993 log.go:181] (0xc000eb8f20) Data frame received for 3\nI0907 09:04:43.466936 2993 log.go:181] (0xc000c40000) (3) Data frame handling\nI0907 09:04:43.466952 2993 log.go:181] (0xc000c40000) (3) Data frame sent\nI0907 09:04:43.466965 2993 log.go:181] (0xc000eb8f20) Data frame received for 5\nI0907 09:04:43.466976 2993 log.go:181] (0xc000c400a0) (5) Data frame handling\nI0907 09:04:43.467005 2993 log.go:181] (0xc000c400a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.209.104:80/\nI0907 09:04:43.471014 2993 log.go:181] (0xc000eb8f20) Data frame received for 3\nI0907 09:04:43.471051 2993 log.go:181] (0xc000c40000) (3) Data frame handling\nI0907 09:04:43.471078 2993 log.go:181] (0xc000c40000) (3) Data frame sent\nI0907 09:04:43.471393 2993 log.go:181] (0xc000eb8f20) Data frame received for 5\nI0907 09:04:43.471418 2993 log.go:181] (0xc000c400a0) (5) Data frame handling\nI0907 09:04:43.471431 2993 log.go:181] (0xc000c400a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.209.104:80/\nI0907 09:04:43.471460 2993 log.go:181] (0xc000eb8f20) Data frame received for 3\nI0907 09:04:43.471471 2993 log.go:181] (0xc000c40000) (3) Data frame handling\nI0907 09:04:43.471488 2993 log.go:181] (0xc000c40000) (3) Data frame sent\nI0907 09:04:43.476976 2993 log.go:181] (0xc000eb8f20) Data frame received for 3\nI0907 09:04:43.476996 2993 log.go:181] (0xc000c40000) (3) Data frame handling\nI0907 09:04:43.477010 2993 log.go:181] (0xc000c40000) (3) Data frame sent\nI0907 09:04:43.477668 2993 log.go:181] (0xc000eb8f20) Data frame received for 3\nI0907 09:04:43.477705 2993 log.go:181] (0xc000c40000) (3) Data frame handling\nI0907 09:04:43.477737 2993 log.go:181] (0xc000c40000) (3) Data frame sent\nI0907 09:04:43.477772 2993 log.go:181] 
(0xc000eb8f20) Data frame received for 5\nI0907 09:04:43.477793 2993 log.go:181] (0xc000c400a0) (5) Data frame handling\nI0907 09:04:43.477812 2993 log.go:181] (0xc000c400a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.209.104:80/\nI0907 09:04:43.485012 2993 log.go:181] (0xc000eb8f20) Data frame received for 3\nI0907 09:04:43.485054 2993 log.go:181] (0xc000c40000) (3) Data frame handling\nI0907 09:04:43.485091 2993 log.go:181] (0xc000c40000) (3) Data frame sent\nI0907 09:04:43.485558 2993 log.go:181] (0xc000eb8f20) Data frame received for 5\nI0907 09:04:43.485590 2993 log.go:181] (0xc000eb8f20) Data frame received for 3\nI0907 09:04:43.485618 2993 log.go:181] (0xc000c40000) (3) Data frame handling\nI0907 09:04:43.485683 2993 log.go:181] (0xc000c40000) (3) Data frame sent\nI0907 09:04:43.485699 2993 log.go:181] (0xc000c400a0) (5) Data frame handling\nI0907 09:04:43.485718 2993 log.go:181] (0xc000c400a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.209.104:80/\nI0907 09:04:43.489940 2993 log.go:181] (0xc000eb8f20) Data frame received for 3\nI0907 09:04:43.489967 2993 log.go:181] (0xc000c40000) (3) Data frame handling\nI0907 09:04:43.489986 2993 log.go:181] (0xc000c40000) (3) Data frame sent\nI0907 09:04:43.490492 2993 log.go:181] (0xc000eb8f20) Data frame received for 5\nI0907 09:04:43.490511 2993 log.go:181] (0xc000c400a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.209.104:80/\nI0907 09:04:43.490530 2993 log.go:181] (0xc000eb8f20) Data frame received for 3\nI0907 09:04:43.490563 2993 log.go:181] (0xc000c40000) (3) Data frame handling\nI0907 09:04:43.490579 2993 log.go:181] (0xc000c40000) (3) Data frame sent\nI0907 09:04:43.490602 2993 log.go:181] (0xc000c400a0) (5) Data frame sent\nI0907 09:04:43.497983 2993 log.go:181] (0xc000eb8f20) Data frame received for 3\nI0907 09:04:43.498003 2993 log.go:181] (0xc000c40000) (3) Data frame handling\nI0907 09:04:43.498023 2993 
log.go:181] (0xc000c40000) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.209.104:80/\n[... repeated "Data frame received / Data frame handling / Data frame sent" stream log lines and further identical curl invocations elided ...]\nI0907 09:04:43.567409 2993 log.go:181] (0xc000eb8f20) (0xc0000e6820) Stream removed, broadcasting: 1\nI0907 09:04:43.567434 2993 log.go:181] (0xc000eb8f20) (0xc000c40000) Stream removed, broadcasting: 3\nI0907 09:04:43.567450 2993 log.go:181] (0xc000eb8f20) (0xc000c400a0) Stream removed, broadcasting: 5\n" Sep 7 09:04:43.571: INFO: stdout:
"\naffinity-clusterip-8kvkp\naffinity-clusterip-8kvkp\n[... "affinity-clusterip-8kvkp" repeated 16 times in total ...]" Sep 7 09:04:43.571: INFO: Received response from host: affinity-clusterip-8kvkp [preceding line repeated 16 times; per-line timestamps elided] Sep 7 09:04:43.572: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-4154, will wait for the garbage collector to delete the pods Sep 7 09:04:43.677: INFO: Deleting ReplicationController affinity-clusterip took: 6.200149ms Sep 7
09:04:44.177: INFO: Terminating ReplicationController affinity-clusterip pods took: 500.220276ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:04:52.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4154" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:20.811 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":257,"skipped":3991,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:04:52.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should 
adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Sep 7 09:04:57.566: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:04:58.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8026" for this suite. • [SLOW TEST:6.291 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":303,"completed":258,"skipped":4002,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating 
a kubernetes client Sep 7 09:04:58.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Sep 7 09:04:58.754: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:05:11.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7294" for this suite. 
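[Editor's note] The test above submits a pod, watches for its creation event, deletes it gracefully, and verifies the deletion event arrives on the watch. A minimal manifest approximating the pod the e2e framework builds programmatically (the name, label, and image below are illustrative assumptions, not the UUID-suffixed values the framework generates) might look like:

```yaml
# Hypothetical sketch: a pod suitable for a submit/watch/delete check.
# The e2e framework generates a unique pod name and uses its own test image.
apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove-example    # illustrative; the test uses a generated name
  labels:
    test: submit-remove              # a label the watch can select on
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.2      # assumption: any long-running image works here
  terminationGracePeriodSeconds: 30  # the graceful-deletion window the test exercises
```

Watching with `kubectl get pods -w -l test=submit-remove` while creating and then deleting such a pod reproduces the creation and deletion events the test asserts on.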
• [SLOW TEST:13.316 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":303,"completed":259,"skipped":4010,"failed":0} SSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:05:11.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-6648/configmap-test-71b10705-fcbb-4958-84c9-e77b907b20ce STEP: Creating a pod to test consume configMaps Sep 7 09:05:12.000: INFO: Waiting up to 5m0s for pod "pod-configmaps-0eb88171-a983-43ad-a948-c217bb34da88" in namespace "configmap-6648" to be "Succeeded or Failed" Sep 7 09:05:12.004: INFO: Pod "pod-configmaps-0eb88171-a983-43ad-a948-c217bb34da88": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.689963ms Sep 7 09:05:14.009: INFO: Pod "pod-configmaps-0eb88171-a983-43ad-a948-c217bb34da88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008415967s Sep 7 09:05:16.014: INFO: Pod "pod-configmaps-0eb88171-a983-43ad-a948-c217bb34da88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013656409s STEP: Saw pod success Sep 7 09:05:16.014: INFO: Pod "pod-configmaps-0eb88171-a983-43ad-a948-c217bb34da88" satisfied condition "Succeeded or Failed" Sep 7 09:05:16.078: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-0eb88171-a983-43ad-a948-c217bb34da88 container env-test: STEP: delete the pod Sep 7 09:05:16.095: INFO: Waiting for pod pod-configmaps-0eb88171-a983-43ad-a948-c217bb34da88 to disappear Sep 7 09:05:16.100: INFO: Pod pod-configmaps-0eb88171-a983-43ad-a948-c217bb34da88 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:05:16.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6648" for this suite. 
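[Editor's note] This ConfigMap test creates a ConfigMap and a pod whose container (`env-test` in the log) maps a ConfigMap key into an environment variable, then waits for the pod to reach "Succeeded or Failed". A rough manifest equivalent, with illustrative names and keys rather than the generated ones, could be:

```yaml
# Hypothetical sketch of the objects this test creates (names/keys are illustrative).
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-example
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never               # the test waits for "Succeeded or Failed"
  containers:
  - name: env-test                   # matches the container name in the log
    image: busybox
    command: ["sh", "-c", "env"]     # print the environment, then exit 0
    env:
    - name: CONFIG_DATA_1            # assumption: the real test uses its own var names
      valueFrom:
        configMapKeyRef:
          name: configmap-test-example
          key: data-1
```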
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":260,"skipped":4014,"failed":0} SSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:05:16.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Sep 7 09:05:20.748: INFO: Successfully updated pod "pod-update-activedeadlineseconds-2fa161f0-966f-4e87-bc7e-3a22737999a7" Sep 7 09:05:20.748: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-2fa161f0-966f-4e87-bc7e-3a22737999a7" in namespace "pods-2368" to be "terminated due to deadline exceeded" Sep 7 09:05:20.753: INFO: Pod "pod-update-activedeadlineseconds-2fa161f0-966f-4e87-bc7e-3a22737999a7": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.5774ms Sep 7 09:05:22.875: INFO: Pod "pod-update-activedeadlineseconds-2fa161f0-966f-4e87-bc7e-3a22737999a7": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.126397014s Sep 7 09:05:22.875: INFO: Pod "pod-update-activedeadlineseconds-2fa161f0-966f-4e87-bc7e-3a22737999a7" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:05:22.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2368" for this suite. • [SLOW TEST:6.779 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":303,"completed":261,"skipped":4017,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:05:22.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to 
be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 7 09:05:23.420: INFO: Waiting up to 5m0s for pod "downwardapi-volume-71864f1b-dbf0-4725-bbf3-beda95f212c5" in namespace "downward-api-8538" to be "Succeeded or Failed" Sep 7 09:05:23.424: INFO: Pod "downwardapi-volume-71864f1b-dbf0-4725-bbf3-beda95f212c5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.430911ms Sep 7 09:05:25.427: INFO: Pod "downwardapi-volume-71864f1b-dbf0-4725-bbf3-beda95f212c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006495909s Sep 7 09:05:27.431: INFO: Pod "downwardapi-volume-71864f1b-dbf0-4725-bbf3-beda95f212c5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010821058s STEP: Saw pod success Sep 7 09:05:27.431: INFO: Pod "downwardapi-volume-71864f1b-dbf0-4725-bbf3-beda95f212c5" satisfied condition "Succeeded or Failed" Sep 7 09:05:27.434: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-71864f1b-dbf0-4725-bbf3-beda95f212c5 container client-container: STEP: delete the pod Sep 7 09:05:27.786: INFO: Waiting for pod downwardapi-volume-71864f1b-dbf0-4725-bbf3-beda95f212c5 to disappear Sep 7 09:05:27.789: INFO: Pod downwardapi-volume-71864f1b-dbf0-4725-bbf3-beda95f212c5 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:05:27.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8538" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":262,"skipped":4024,"failed":0} SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:05:27.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-4870 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 7 09:05:27.968: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Sep 7 09:05:28.091: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 7 09:05:30.222: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 7 09:05:32.094: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 7 09:05:34.096: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 7 09:05:36.120: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 7 09:05:38.095: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 7 09:05:40.096: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 7 09:05:42.107: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 7 09:05:44.095: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 7 09:05:46.096: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 7 09:05:48.097: INFO: The status of Pod netserver-0 is Running (Ready = true) Sep 7 09:05:48.103: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Sep 7 09:05:52.191: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=udp&host=10.244.2.183&port=8081&tries=1'] Namespace:pod-network-test-4870 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 7 09:05:52.191: INFO: >>> kubeConfig: /root/.kube/config I0907 09:05:52.229397 7 log.go:181] (0xc0006e3ad0) (0xc0020c5180) Create stream I0907 
09:05:52.229440 7 log.go:181] (0xc0006e3ad0) (0xc0020c5180) Stream added, broadcasting: 1 [... stream create/reply/data frame log lines elided ...] I0907 09:05:52.317643 7 log.go:181] (0xc0006e3ad0) (0xc0020c5180) Stream removed, broadcasting: 1 I0907 09:05:52.317672 7 log.go:181] (0xc0006e3ad0) (0xc0035d94a0) Stream removed, broadcasting: 3 I0907 09:05:52.317684 7 log.go:181] (0xc0006e3ad0) (0xc004755e00) Stream removed, broadcasting: 5 Sep 7 09:05:52.317: INFO: Waiting for responses: map[] Sep 7 09:05:52.321: INFO: ExecWithOptions
{Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=udp&host=10.244.1.161&port=8081&tries=1'] Namespace:pod-network-test-4870 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 7 09:05:52.321: INFO: >>> kubeConfig: /root/.kube/config [... stream create/reply/data frame log lines elided ...] I0907 09:05:52.429811 7 log.go:181] (0xc0014ba840) (0xc0035d9860) Stream removed, broadcasting: 1 I0907 09:05:52.429839 7 log.go:181] (0xc0014ba840) (0xc003deeaa0) Stream removed, broadcasting: 3 I0907 09:05:52.429851 7 log.go:181] (0xc0014ba840) (0xc0035d9900) Stream removed, broadcasting: 5 Sep 7 09:05:52.429: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:05:52.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4870" for this suite. • [SLOW TEST:24.639 seconds] [sig-network] Networking /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":303,"completed":263,"skipped":4029,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:05:52.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2651 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Sep 7 09:05:52.617: INFO: Found 0 stateful pods, waiting for 3 Sep 7 09:06:02.623: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 7 09:06:02.623: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 7 09:06:02.623: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Sep 7 09:06:12.621: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 7 09:06:12.621: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 7 09:06:12.621: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Sep 7 
09:06:12.649: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Sep 7 09:06:22.737: INFO: Updating stateful set ss2 Sep 7 09:06:22.793: INFO: Waiting for Pod statefulset-2651/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Sep 7 09:06:33.630: INFO: Found 2 stateful pods, waiting for 3 Sep 7 09:06:43.637: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 7 09:06:43.637: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 7 09:06:43.637: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Sep 7 09:06:43.662: INFO: Updating stateful set ss2 Sep 7 09:06:43.697: INFO: Waiting for Pod statefulset-2651/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 7 09:06:53.725: INFO: Updating stateful set ss2 Sep 7 09:06:53.841: INFO: Waiting for StatefulSet statefulset-2651/ss2 to complete update Sep 7 09:06:53.841: INFO: Waiting for Pod statefulset-2651/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 7 09:07:03.934: INFO: Waiting for StatefulSet statefulset-2651/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 7 09:07:13.850: INFO: Deleting all statefulset in ns statefulset-2651 Sep 7 09:07:13.853: INFO: Scaling statefulset ss2 to 0 Sep 7 09:07:33.888: INFO: Waiting for statefulset status.replicas updated to 0 Sep 7 09:07:33.890: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:07:33.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2651" for this suite. • [SLOW TEST:101.469 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":303,"completed":264,"skipped":4032,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:07:33.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Sep 7 09:07:33.974: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6887' Sep 7 09:07:34.290: INFO: stderr: "" Sep 7 09:07:34.290: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Sep 7 09:07:35.294: INFO: Selector matched 1 pods for map[app:agnhost] Sep 7 09:07:35.294: INFO: Found 0 / 1 Sep 7 09:07:36.343: INFO: Selector matched 1 pods for map[app:agnhost] Sep 7 09:07:36.343: INFO: Found 0 / 1 Sep 7 09:07:37.296: INFO: Selector matched 1 pods for map[app:agnhost] Sep 7 09:07:37.296: INFO: Found 0 / 1 Sep 7 09:07:38.295: INFO: Selector matched 1 pods for map[app:agnhost] Sep 7 09:07:38.295: INFO: Found 1 / 1 Sep 7 09:07:38.295: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Sep 7 09:07:38.298: INFO: Selector matched 1 pods for map[app:agnhost] Sep 7 09:07:38.298: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Sep 7 09:07:38.298: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config patch pod agnhost-primary-4vxt4 --namespace=kubectl-6887 -p {"metadata":{"annotations":{"x":"y"}}}' Sep 7 09:07:38.418: INFO: stderr: "" Sep 7 09:07:38.418: INFO: stdout: "pod/agnhost-primary-4vxt4 patched\n" STEP: checking annotations Sep 7 09:07:38.433: INFO: Selector matched 1 pods for map[app:agnhost] Sep 7 09:07:38.433: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
[AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:07:38.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6887" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":303,"completed":265,"skipped":4045,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:07:38.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:07:38.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-7367" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":303,"completed":266,"skipped":4086,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:07:38.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Sep 7 09:07:38.562: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Sep 7 09:07:49.413: INFO: >>> kubeConfig: /root/.kube/config Sep 7 09:07:51.353: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] 
CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:08:03.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1451" for this suite. • [SLOW TEST:24.624 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":303,"completed":267,"skipped":4097,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:08:03.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] 
[Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod Sep 7 09:10:03.787: INFO: Successfully updated pod "var-expansion-6e0aade4-f5e2-4a13-8b52-799c3e67d24f" STEP: waiting for pod running STEP: deleting the pod gracefully Sep 7 09:10:05.836: INFO: Deleting pod "var-expansion-6e0aade4-f5e2-4a13-8b52-799c3e67d24f" in namespace "var-expansion-7816" Sep 7 09:10:05.842: INFO: Wait up to 5m0s for pod "var-expansion-6e0aade4-f5e2-4a13-8b52-799c3e67d24f" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:10:39.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7816" for this suite. • [SLOW TEST:156.786 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":303,"completed":268,"skipped":4108,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:10:39.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2255 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-2255 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2255 Sep 7 09:10:40.052: INFO: Found 0 stateful pods, waiting for 1 Sep 7 09:10:50.058: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Sep 7 09:10:50.061: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2255 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 7 09:10:53.235: INFO: stderr: "I0907 09:10:53.128191 3047 log.go:181] 
(0xc0001b2370) (0xc0002661e0) Create stream\nI0907 09:10:53.128248 3047 log.go:181] (0xc0001b2370) (0xc0002661e0) Stream added, broadcasting: 1\nI0907 09:10:53.131249 3047 log.go:181] (0xc0001b2370) Reply frame received for 1\nI0907 09:10:53.131300 3047 log.go:181] (0xc0001b2370) (0xc000266280) Create stream\nI0907 09:10:53.131322 3047 log.go:181] (0xc0001b2370) (0xc000266280) Stream added, broadcasting: 3\nI0907 09:10:53.133186 3047 log.go:181] (0xc0001b2370) Reply frame received for 3\nI0907 09:10:53.133232 3047 log.go:181] (0xc0001b2370) (0xc000d16000) Create stream\nI0907 09:10:53.133246 3047 log.go:181] (0xc0001b2370) (0xc000d16000) Stream added, broadcasting: 5\nI0907 09:10:53.134064 3047 log.go:181] (0xc0001b2370) Reply frame received for 5\nI0907 09:10:53.196092 3047 log.go:181] (0xc0001b2370) Data frame received for 5\nI0907 09:10:53.196136 3047 log.go:181] (0xc000d16000) (5) Data frame handling\nI0907 09:10:53.196199 3047 log.go:181] (0xc000d16000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0907 09:10:53.227662 3047 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:10:53.227704 3047 log.go:181] (0xc000266280) (3) Data frame handling\nI0907 09:10:53.227745 3047 log.go:181] (0xc000266280) (3) Data frame sent\nI0907 09:10:53.227766 3047 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:10:53.227782 3047 log.go:181] (0xc000266280) (3) Data frame handling\nI0907 09:10:53.227998 3047 log.go:181] (0xc0001b2370) Data frame received for 5\nI0907 09:10:53.228112 3047 log.go:181] (0xc000d16000) (5) Data frame handling\nI0907 09:10:53.230375 3047 log.go:181] (0xc0001b2370) Data frame received for 1\nI0907 09:10:53.230404 3047 log.go:181] (0xc0002661e0) (1) Data frame handling\nI0907 09:10:53.230448 3047 log.go:181] (0xc0002661e0) (1) Data frame sent\nI0907 09:10:53.230469 3047 log.go:181] (0xc0001b2370) (0xc0002661e0) Stream removed, broadcasting: 1\nI0907 09:10:53.230495 3047 log.go:181] (0xc0001b2370) Go away 
received\nI0907 09:10:53.230941 3047 log.go:181] (0xc0001b2370) (0xc0002661e0) Stream removed, broadcasting: 1\nI0907 09:10:53.230968 3047 log.go:181] (0xc0001b2370) (0xc000266280) Stream removed, broadcasting: 3\nI0907 09:10:53.230990 3047 log.go:181] (0xc0001b2370) (0xc000d16000) Stream removed, broadcasting: 5\n" Sep 7 09:10:53.236: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 7 09:10:53.236: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 7 09:10:53.240: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Sep 7 09:11:03.245: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 7 09:11:03.245: INFO: Waiting for statefulset status.replicas updated to 0 Sep 7 09:11:03.291: INFO: POD NODE PHASE GRACE CONDITIONS Sep 7 09:11:03.291: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:10:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:10:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:10:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:10:40 +0000 UTC }] Sep 7 09:11:03.291: INFO: Sep 7 09:11:03.291: INFO: StatefulSet ss has not reached scale 3, at 1 Sep 7 09:11:04.406: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.964623357s Sep 7 09:11:05.410: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.849431426s Sep 7 09:11:06.496: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.845864634s Sep 7 09:11:07.501: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.759523101s Sep 7 09:11:08.522: INFO: Verifying statefulset ss doesn't scale past 3 for 
another 4.754741229s Sep 7 09:11:09.527: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.733477198s Sep 7 09:11:10.533: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.728287461s Sep 7 09:11:11.537: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.723053031s Sep 7 09:11:12.556: INFO: Verifying statefulset ss doesn't scale past 3 for another 718.398461ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2255 Sep 7 09:11:13.561: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2255 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 7 09:11:13.778: INFO: stderr: "I0907 09:11:13.706977 3065 log.go:181] (0xc0008bd340) (0xc000d168c0) Create stream\nI0907 09:11:13.707037 3065 log.go:181] (0xc0008bd340) (0xc000d168c0) Stream added, broadcasting: 1\nI0907 09:11:13.712687 3065 log.go:181] (0xc0008bd340) Reply frame received for 1\nI0907 09:11:13.712741 3065 log.go:181] (0xc0008bd340) (0xc000d16000) Create stream\nI0907 09:11:13.712756 3065 log.go:181] (0xc0008bd340) (0xc000d16000) Stream added, broadcasting: 3\nI0907 09:11:13.713641 3065 log.go:181] (0xc0008bd340) Reply frame received for 3\nI0907 09:11:13.713678 3065 log.go:181] (0xc0008bd340) (0xc0003086e0) Create stream\nI0907 09:11:13.713694 3065 log.go:181] (0xc0008bd340) (0xc0003086e0) Stream added, broadcasting: 5\nI0907 09:11:13.714586 3065 log.go:181] (0xc0008bd340) Reply frame received for 5\nI0907 09:11:13.770722 3065 log.go:181] (0xc0008bd340) Data frame received for 3\nI0907 09:11:13.770764 3065 log.go:181] (0xc000d16000) (3) Data frame handling\nI0907 09:11:13.770789 3065 log.go:181] (0xc000d16000) (3) Data frame sent\nI0907 09:11:13.770809 3065 log.go:181] (0xc0008bd340) Data frame received for 3\nI0907 09:11:13.770827 3065 log.go:181] (0xc000d16000) (3) Data frame 
handling\nI0907 09:11:13.770856 3065 log.go:181] (0xc0008bd340) Data frame received for 5\nI0907 09:11:13.770875 3065 log.go:181] (0xc0003086e0) (5) Data frame handling\nI0907 09:11:13.770917 3065 log.go:181] (0xc0003086e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0907 09:11:13.770981 3065 log.go:181] (0xc0008bd340) Data frame received for 5\nI0907 09:11:13.771005 3065 log.go:181] (0xc0003086e0) (5) Data frame handling\nI0907 09:11:13.772692 3065 log.go:181] (0xc0008bd340) Data frame received for 1\nI0907 09:11:13.772719 3065 log.go:181] (0xc000d168c0) (1) Data frame handling\nI0907 09:11:13.772732 3065 log.go:181] (0xc000d168c0) (1) Data frame sent\nI0907 09:11:13.772772 3065 log.go:181] (0xc0008bd340) (0xc000d168c0) Stream removed, broadcasting: 1\nI0907 09:11:13.772817 3065 log.go:181] (0xc0008bd340) Go away received\nI0907 09:11:13.773251 3065 log.go:181] (0xc0008bd340) (0xc000d168c0) Stream removed, broadcasting: 1\nI0907 09:11:13.773274 3065 log.go:181] (0xc0008bd340) (0xc000d16000) Stream removed, broadcasting: 3\nI0907 09:11:13.773285 3065 log.go:181] (0xc0008bd340) (0xc0003086e0) Stream removed, broadcasting: 5\n" Sep 7 09:11:13.778: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 7 09:11:13.778: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 7 09:11:13.778: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2255 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 7 09:11:13.970: INFO: stderr: "I0907 09:11:13.902212 3084 log.go:181] (0xc00018d6b0) (0xc00013c8c0) Create stream\nI0907 09:11:13.902273 3084 log.go:181] (0xc00018d6b0) (0xc00013c8c0) Stream added, broadcasting: 1\nI0907 09:11:13.905709 3084 log.go:181] (0xc00018d6b0) Reply frame received for 1\nI0907 09:11:13.905827 
3084 log.go:181] (0xc00018d6b0) (0xc000d48320) Create stream\nI0907 09:11:13.905889 3084 log.go:181] (0xc00018d6b0) (0xc000d48320) Stream added, broadcasting: 3\nI0907 09:11:13.907138 3084 log.go:181] (0xc00018d6b0) Reply frame received for 3\nI0907 09:11:13.907194 3084 log.go:181] (0xc00018d6b0) (0xc000d48000) Create stream\nI0907 09:11:13.907211 3084 log.go:181] (0xc00018d6b0) (0xc000d48000) Stream added, broadcasting: 5\nI0907 09:11:13.908262 3084 log.go:181] (0xc00018d6b0) Reply frame received for 5\nI0907 09:11:13.964275 3084 log.go:181] (0xc00018d6b0) Data frame received for 3\nI0907 09:11:13.964297 3084 log.go:181] (0xc000d48320) (3) Data frame handling\nI0907 09:11:13.964307 3084 log.go:181] (0xc000d48320) (3) Data frame sent\nI0907 09:11:13.964315 3084 log.go:181] (0xc00018d6b0) Data frame received for 3\nI0907 09:11:13.964327 3084 log.go:181] (0xc00018d6b0) Data frame received for 5\nI0907 09:11:13.964361 3084 log.go:181] (0xc000d48000) (5) Data frame handling\nI0907 09:11:13.964373 3084 log.go:181] (0xc000d48000) (5) Data frame sent\nI0907 09:11:13.964381 3084 log.go:181] (0xc00018d6b0) Data frame received for 5\nI0907 09:11:13.964386 3084 log.go:181] (0xc000d48000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0907 09:11:13.964400 3084 log.go:181] (0xc000d48320) (3) Data frame handling\nI0907 09:11:13.966091 3084 log.go:181] (0xc00018d6b0) Data frame received for 1\nI0907 09:11:13.966103 3084 log.go:181] (0xc00013c8c0) (1) Data frame handling\nI0907 09:11:13.966119 3084 log.go:181] (0xc00013c8c0) (1) Data frame sent\nI0907 09:11:13.966132 3084 log.go:181] (0xc00018d6b0) (0xc00013c8c0) Stream removed, broadcasting: 1\nI0907 09:11:13.966304 3084 log.go:181] (0xc00018d6b0) Go away received\nI0907 09:11:13.966426 3084 log.go:181] (0xc00018d6b0) (0xc00013c8c0) Stream removed, broadcasting: 1\nI0907 09:11:13.966437 3084 log.go:181] (0xc00018d6b0) 
(0xc000d48320) Stream removed, broadcasting: 3\nI0907 09:11:13.966442 3084 log.go:181] (0xc00018d6b0) (0xc000d48000) Stream removed, broadcasting: 5\n" Sep 7 09:11:13.970: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 7 09:11:13.970: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 7 09:11:13.970: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2255 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 7 09:11:14.188: INFO: stderr: "I0907 09:11:14.109645 3102 log.go:181] (0xc000142370) (0xc0007972c0) Create stream\nI0907 09:11:14.109709 3102 log.go:181] (0xc000142370) (0xc0007972c0) Stream added, broadcasting: 1\nI0907 09:11:14.111926 3102 log.go:181] (0xc000142370) Reply frame received for 1\nI0907 09:11:14.111992 3102 log.go:181] (0xc000142370) (0xc000797360) Create stream\nI0907 09:11:14.112122 3102 log.go:181] (0xc000142370) (0xc000797360) Stream added, broadcasting: 3\nI0907 09:11:14.113160 3102 log.go:181] (0xc000142370) Reply frame received for 3\nI0907 09:11:14.113188 3102 log.go:181] (0xc000142370) (0xc000c65f40) Create stream\nI0907 09:11:14.113196 3102 log.go:181] (0xc000142370) (0xc000c65f40) Stream added, broadcasting: 5\nI0907 09:11:14.114046 3102 log.go:181] (0xc000142370) Reply frame received for 5\nI0907 09:11:14.180880 3102 log.go:181] (0xc000142370) Data frame received for 5\nI0907 09:11:14.180921 3102 log.go:181] (0xc000c65f40) (5) Data frame handling\nI0907 09:11:14.180937 3102 log.go:181] (0xc000c65f40) (5) Data frame sent\nI0907 09:11:14.180948 3102 log.go:181] (0xc000142370) Data frame received for 5\nI0907 09:11:14.180958 3102 log.go:181] (0xc000c65f40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ 
true\nI0907 09:11:14.180990 3102 log.go:181] (0xc000142370) Data frame received for 3\nI0907 09:11:14.181004 3102 log.go:181] (0xc000797360) (3) Data frame handling\nI0907 09:11:14.181022 3102 log.go:181] (0xc000797360) (3) Data frame sent\nI0907 09:11:14.181036 3102 log.go:181] (0xc000142370) Data frame received for 3\nI0907 09:11:14.181048 3102 log.go:181] (0xc000797360) (3) Data frame handling\nI0907 09:11:14.182542 3102 log.go:181] (0xc000142370) Data frame received for 1\nI0907 09:11:14.182557 3102 log.go:181] (0xc0007972c0) (1) Data frame handling\nI0907 09:11:14.182564 3102 log.go:181] (0xc0007972c0) (1) Data frame sent\nI0907 09:11:14.182705 3102 log.go:181] (0xc000142370) (0xc0007972c0) Stream removed, broadcasting: 1\nI0907 09:11:14.182765 3102 log.go:181] (0xc000142370) Go away received\nI0907 09:11:14.183241 3102 log.go:181] (0xc000142370) (0xc0007972c0) Stream removed, broadcasting: 1\nI0907 09:11:14.183263 3102 log.go:181] (0xc000142370) (0xc000797360) Stream removed, broadcasting: 3\nI0907 09:11:14.183274 3102 log.go:181] (0xc000142370) (0xc000c65f40) Stream removed, broadcasting: 5\n" Sep 7 09:11:14.188: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 7 09:11:14.188: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 7 09:11:14.193: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Sep 7 09:11:24.198: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Sep 7 09:11:24.198: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Sep 7 09:11:24.198: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Sep 7 09:11:24.203: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config 
exec --namespace=statefulset-2255 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 7 09:11:24.451: INFO: stderr: "I0907 09:11:24.337608 3121 log.go:181] (0xc00074b4a0) (0xc0006a2a00) Create stream\nI0907 09:11:24.337678 3121 log.go:181] (0xc00074b4a0) (0xc0006a2a00) Stream added, broadcasting: 1\nI0907 09:11:24.342592 3121 log.go:181] (0xc00074b4a0) Reply frame received for 1\nI0907 09:11:24.342640 3121 log.go:181] (0xc00074b4a0) (0xc0006a2000) Create stream\nI0907 09:11:24.342657 3121 log.go:181] (0xc00074b4a0) (0xc0006a2000) Stream added, broadcasting: 3\nI0907 09:11:24.348423 3121 log.go:181] (0xc00074b4a0) Reply frame received for 3\nI0907 09:11:24.348479 3121 log.go:181] (0xc00074b4a0) (0xc0008a8140) Create stream\nI0907 09:11:24.348503 3121 log.go:181] (0xc00074b4a0) (0xc0008a8140) Stream added, broadcasting: 5\nI0907 09:11:24.349579 3121 log.go:181] (0xc00074b4a0) Reply frame received for 5\nI0907 09:11:24.443763 3121 log.go:181] (0xc00074b4a0) Data frame received for 3\nI0907 09:11:24.443798 3121 log.go:181] (0xc0006a2000) (3) Data frame handling\nI0907 09:11:24.443809 3121 log.go:181] (0xc0006a2000) (3) Data frame sent\nI0907 09:11:24.443815 3121 log.go:181] (0xc00074b4a0) Data frame received for 3\nI0907 09:11:24.443821 3121 log.go:181] (0xc0006a2000) (3) Data frame handling\nI0907 09:11:24.443833 3121 log.go:181] (0xc00074b4a0) Data frame received for 5\nI0907 09:11:24.443845 3121 log.go:181] (0xc0008a8140) (5) Data frame handling\nI0907 09:11:24.443859 3121 log.go:181] (0xc0008a8140) (5) Data frame sent\nI0907 09:11:24.443869 3121 log.go:181] (0xc00074b4a0) Data frame received for 5\nI0907 09:11:24.443874 3121 log.go:181] (0xc0008a8140) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0907 09:11:24.445867 3121 log.go:181] (0xc00074b4a0) Data frame received for 1\nI0907 09:11:24.445896 3121 log.go:181] (0xc0006a2a00) (1) Data frame handling\nI0907 09:11:24.445909 3121 log.go:181] 
(0xc0006a2a00) (1) Data frame sent\nI0907 09:11:24.445924 3121 log.go:181] (0xc00074b4a0) (0xc0006a2a00) Stream removed, broadcasting: 1\nI0907 09:11:24.445951 3121 log.go:181] (0xc00074b4a0) Go away received\nI0907 09:11:24.446360 3121 log.go:181] (0xc00074b4a0) (0xc0006a2a00) Stream removed, broadcasting: 1\nI0907 09:11:24.446373 3121 log.go:181] (0xc00074b4a0) (0xc0006a2000) Stream removed, broadcasting: 3\nI0907 09:11:24.446380 3121 log.go:181] (0xc00074b4a0) (0xc0008a8140) Stream removed, broadcasting: 5\n" Sep 7 09:11:24.451: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 7 09:11:24.451: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 7 09:11:24.451: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2255 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 7 09:11:24.720: INFO: stderr: "I0907 09:11:24.616271 3139 log.go:181] (0xc0004fd3f0) (0xc0004f2a00) Create stream\nI0907 09:11:24.616331 3139 log.go:181] (0xc0004fd3f0) (0xc0004f2a00) Stream added, broadcasting: 1\nI0907 09:11:24.618662 3139 log.go:181] (0xc0004fd3f0) Reply frame received for 1\nI0907 09:11:24.618695 3139 log.go:181] (0xc0004fd3f0) (0xc0009b00a0) Create stream\nI0907 09:11:24.618704 3139 log.go:181] (0xc0004fd3f0) (0xc0009b00a0) Stream added, broadcasting: 3\nI0907 09:11:24.619694 3139 log.go:181] (0xc0004fd3f0) Reply frame received for 3\nI0907 09:11:24.619729 3139 log.go:181] (0xc0004fd3f0) (0xc000b34280) Create stream\nI0907 09:11:24.620102 3139 log.go:181] (0xc0004fd3f0) (0xc000b34280) Stream added, broadcasting: 5\nI0907 09:11:24.622389 3139 log.go:181] (0xc0004fd3f0) Reply frame received for 5\nI0907 09:11:24.681907 3139 log.go:181] (0xc0004fd3f0) Data frame received for 5\nI0907 09:11:24.681937 3139 log.go:181] (0xc000b34280) (5) Data 
frame handling\nI0907 09:11:24.681957 3139 log.go:181] (0xc000b34280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0907 09:11:24.713129 3139 log.go:181] (0xc0004fd3f0) Data frame received for 3\nI0907 09:11:24.713163 3139 log.go:181] (0xc0009b00a0) (3) Data frame handling\nI0907 09:11:24.713197 3139 log.go:181] (0xc0009b00a0) (3) Data frame sent\nI0907 09:11:24.713222 3139 log.go:181] (0xc0004fd3f0) Data frame received for 3\nI0907 09:11:24.713234 3139 log.go:181] (0xc0009b00a0) (3) Data frame handling\nI0907 09:11:24.713259 3139 log.go:181] (0xc0004fd3f0) Data frame received for 5\nI0907 09:11:24.713289 3139 log.go:181] (0xc000b34280) (5) Data frame handling\nI0907 09:11:24.715440 3139 log.go:181] (0xc0004fd3f0) Data frame received for 1\nI0907 09:11:24.715470 3139 log.go:181] (0xc0004f2a00) (1) Data frame handling\nI0907 09:11:24.715483 3139 log.go:181] (0xc0004f2a00) (1) Data frame sent\nI0907 09:11:24.715504 3139 log.go:181] (0xc0004fd3f0) (0xc0004f2a00) Stream removed, broadcasting: 1\nI0907 09:11:24.715909 3139 log.go:181] (0xc0004fd3f0) (0xc0004f2a00) Stream removed, broadcasting: 1\nI0907 09:11:24.715937 3139 log.go:181] (0xc0004fd3f0) (0xc0009b00a0) Stream removed, broadcasting: 3\nI0907 09:11:24.715951 3139 log.go:181] (0xc0004fd3f0) (0xc000b34280) Stream removed, broadcasting: 5\n" Sep 7 09:11:24.720: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 7 09:11:24.720: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 7 09:11:24.721: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2255 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 7 09:11:24.972: INFO: stderr: "I0907 09:11:24.857515 3158 log.go:181] (0xc000c5b760) (0xc000c528c0) Create stream\nI0907 09:11:24.857574 3158 
log.go:181] (0xc000c5b760) (0xc000c528c0) Stream added, broadcasting: 1\nI0907 09:11:24.862824 3158 log.go:181] (0xc000c5b760) Reply frame received for 1\nI0907 09:11:24.862878 3158 log.go:181] (0xc000c5b760) (0xc000c52000) Create stream\nI0907 09:11:24.862896 3158 log.go:181] (0xc000c5b760) (0xc000c52000) Stream added, broadcasting: 3\nI0907 09:11:24.864151 3158 log.go:181] (0xc000c5b760) Reply frame received for 3\nI0907 09:11:24.864224 3158 log.go:181] (0xc000c5b760) (0xc000de80a0) Create stream\nI0907 09:11:24.864250 3158 log.go:181] (0xc000c5b760) (0xc000de80a0) Stream added, broadcasting: 5\nI0907 09:11:24.865611 3158 log.go:181] (0xc000c5b760) Reply frame received for 5\nI0907 09:11:24.906048 3158 log.go:181] (0xc000c5b760) Data frame received for 5\nI0907 09:11:24.906082 3158 log.go:181] (0xc000de80a0) (5) Data frame handling\nI0907 09:11:24.906101 3158 log.go:181] (0xc000de80a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0907 09:11:24.965192 3158 log.go:181] (0xc000c5b760) Data frame received for 3\nI0907 09:11:24.965223 3158 log.go:181] (0xc000c52000) (3) Data frame handling\nI0907 09:11:24.965235 3158 log.go:181] (0xc000c52000) (3) Data frame sent\nI0907 09:11:24.965242 3158 log.go:181] (0xc000c5b760) Data frame received for 3\nI0907 09:11:24.965248 3158 log.go:181] (0xc000c52000) (3) Data frame handling\nI0907 09:11:24.965316 3158 log.go:181] (0xc000c5b760) Data frame received for 5\nI0907 09:11:24.965340 3158 log.go:181] (0xc000de80a0) (5) Data frame handling\nI0907 09:11:24.967299 3158 log.go:181] (0xc000c5b760) Data frame received for 1\nI0907 09:11:24.967425 3158 log.go:181] (0xc000c528c0) (1) Data frame handling\nI0907 09:11:24.967481 3158 log.go:181] (0xc000c528c0) (1) Data frame sent\nI0907 09:11:24.967512 3158 log.go:181] (0xc000c5b760) (0xc000c528c0) Stream removed, broadcasting: 1\nI0907 09:11:24.967559 3158 log.go:181] (0xc000c5b760) Go away received\nI0907 09:11:24.968229 3158 log.go:181] (0xc000c5b760) 
(0xc000c528c0) Stream removed, broadcasting: 1\nI0907 09:11:24.968257 3158 log.go:181] (0xc000c5b760) (0xc000c52000) Stream removed, broadcasting: 3\nI0907 09:11:24.968277 3158 log.go:181] (0xc000c5b760) (0xc000de80a0) Stream removed, broadcasting: 5\n" Sep 7 09:11:24.972: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 7 09:11:24.972: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 7 09:11:24.972: INFO: Waiting for statefulset status.replicas updated to 0 Sep 7 09:11:25.065: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Sep 7 09:11:35.074: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 7 09:11:35.074: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Sep 7 09:11:35.074: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Sep 7 09:11:35.084: INFO: POD NODE PHASE GRACE CONDITIONS Sep 7 09:11:35.084: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:10:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:10:40 +0000 UTC }] Sep 7 09:11:35.084: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:03 +0000 UTC }] Sep 7 09:11:35.084: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:03 +0000 UTC }] Sep 7 09:11:35.084: INFO: Sep 7 09:11:35.084: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 7 09:11:36.186: INFO: POD NODE PHASE GRACE CONDITIONS Sep 7 09:11:36.186: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:10:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:10:40 +0000 UTC }] Sep 7 09:11:36.186: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:03 +0000 UTC }] Sep 7 09:11:36.186: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:25 +0000 UTC ContainersNotReady containers with unready 
status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:03 +0000 UTC }] Sep 7 09:11:36.186: INFO: Sep 7 09:11:36.186: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 7 09:11:37.192: INFO: POD NODE PHASE GRACE CONDITIONS Sep 7 09:11:37.192: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:10:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:10:40 +0000 UTC }] Sep 7 09:11:37.192: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:03 +0000 UTC }] Sep 7 09:11:37.192: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:03 +0000 UTC }] Sep 7 09:11:37.192: INFO: Sep 7 09:11:37.192: INFO: StatefulSet ss has not 
reached scale 0, at 3 Sep 7 09:11:38.198: INFO: POD NODE PHASE GRACE CONDITIONS Sep 7 09:11:38.198: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:10:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:10:40 +0000 UTC }] Sep 7 09:11:38.198: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:03 +0000 UTC }] Sep 7 09:11:38.198: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-07 09:11:03 +0000 UTC }] Sep 7 09:11:38.198: INFO: Sep 7 09:11:38.198: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 7 09:11:39.202: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.883058673s Sep 7 09:11:40.207: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.878442276s Sep 7 09:11:41.210: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.873620052s Sep 7 
09:11:42.214: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.870384982s Sep 7 09:11:43.220: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.866370359s Sep 7 09:11:44.225: INFO: Verifying statefulset ss doesn't scale past 0 for another 860.397247ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-2255 Sep 7 09:11:45.230: INFO: Scaling statefulset ss to 0 Sep 7 09:11:45.242: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 7 09:11:45.244: INFO: Deleting all statefulset in ns statefulset-2255 Sep 7 09:11:45.247: INFO: Scaling statefulset ss to 0 Sep 7 09:11:45.254: INFO: Waiting for statefulset status.replicas updated to 0 Sep 7 09:11:45.256: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:11:45.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2255" for this suite. 
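The repeated "Waiting for statefulset status.replicas updated to 0" and "Verifying statefulset ss doesn't scale past 0 for another …s" lines above come from a poll-until-deadline pattern. A minimal sketch of that pattern in shell (`wait_for` is a hypothetical helper, not part of kubectl or the e2e framework):

```shell
#!/bin/sh
# wait_for TIMEOUT_SECONDS INTERVAL_SECONDS "condition command"
# Re-runs the condition every INTERVAL seconds until it succeeds or
# TIMEOUT expires, mirroring the 5m0s/3m0s deadlines in the log above.
wait_for() {
  timeout="$1"; interval="$2"; shift 2
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if sh -c "$*"; then
      return 0          # condition met before the deadline
    fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  return 1              # timed out
}

# Against a real cluster this could wrap a kubectl query, e.g.:
#   wait_for 300 5 '[ "$(kubectl -n statefulset-2255 get statefulset ss \
#       -o jsonpath={.status.replicas})" = "0" ]'
wait_for 3 1 true && echo "condition met"
```

The condition is passed as a command string so the same helper serves both the "status.replicas updated to 0" wait and the per-pod Ready checks.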
• [SLOW TEST:65.384 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":303,"completed":269,"skipped":4119,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:11:45.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name 
projected-configmap-test-volume-map-8f92a9bd-251a-4f0f-9e43-1b9316e0bafb STEP: Creating a pod to test consume configMaps Sep 7 09:11:45.421: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b01b1753-6ff6-4dc7-971d-99ae2444f6c1" in namespace "projected-2666" to be "Succeeded or Failed" Sep 7 09:11:45.425: INFO: Pod "pod-projected-configmaps-b01b1753-6ff6-4dc7-971d-99ae2444f6c1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.450027ms Sep 7 09:11:47.429: INFO: Pod "pod-projected-configmaps-b01b1753-6ff6-4dc7-971d-99ae2444f6c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007585779s Sep 7 09:11:49.433: INFO: Pod "pod-projected-configmaps-b01b1753-6ff6-4dc7-971d-99ae2444f6c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011887589s STEP: Saw pod success Sep 7 09:11:49.433: INFO: Pod "pod-projected-configmaps-b01b1753-6ff6-4dc7-971d-99ae2444f6c1" satisfied condition "Succeeded or Failed" Sep 7 09:11:49.436: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-b01b1753-6ff6-4dc7-971d-99ae2444f6c1 container projected-configmap-volume-test: STEP: delete the pod Sep 7 09:11:49.483: INFO: Waiting for pod pod-projected-configmaps-b01b1753-6ff6-4dc7-971d-99ae2444f6c1 to disappear Sep 7 09:11:49.490: INFO: Pod pod-projected-configmaps-b01b1753-6ff6-4dc7-971d-99ae2444f6c1 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:11:49.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2666" for this suite. 
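The test above creates a ConfigMap, then a pod that consumes it through a projected volume with a key-to-path mapping, running as non-root. A sketch of the kind of pod spec involved (names, image, and args are illustrative, not the generated identifiers from this run):

```yaml
# Illustrative sketch only: a projected ConfigMap volume whose key is
# remapped to a different path, read by a non-root container.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  securityContext:
    runAsUser: 1000                  # non-root, per [NodeConformance] variant
  restartPolicy: Never
  volumes:
    - name: projected-configmap-volume
      projected:
        sources:
          - configMap:
              name: projected-configmap-test-volume-map
              items:
                - key: data-1
                  path: path/to/data-2   # the "mapping" under test
  containers:
    - name: projected-configmap-volume-test
      image: registry.k8s.io/e2e-test-images/agnhost:2.20   # assumed image
      args: ["mounttest", "--file_content=/etc/projected-configmap-volume/path/to/data-2"]
      volumeMounts:
        - name: projected-configmap-volume
          mountPath: /etc/projected-configmap-volume
```

The pod prints the mapped file's content and exits, which is why the log waits for phase "Succeeded or Failed" and then fetches container logs.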
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":270,"skipped":4127,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:11:49.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-3a961645-f058-486f-91d8-7d963995d3dd STEP: Creating a pod to test consume secrets Sep 7 09:11:49.781: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-11e9529f-a5ca-4a1f-9f2d-fa9c3f03b552" in namespace "projected-5641" to be "Succeeded or Failed" Sep 7 09:11:49.826: INFO: Pod "pod-projected-secrets-11e9529f-a5ca-4a1f-9f2d-fa9c3f03b552": Phase="Pending", Reason="", readiness=false. Elapsed: 44.543477ms Sep 7 09:11:51.830: INFO: Pod "pod-projected-secrets-11e9529f-a5ca-4a1f-9f2d-fa9c3f03b552": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048471053s Sep 7 09:11:53.835: INFO: Pod "pod-projected-secrets-11e9529f-a5ca-4a1f-9f2d-fa9c3f03b552": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.05366211s Sep 7 09:11:55.839: INFO: Pod "pod-projected-secrets-11e9529f-a5ca-4a1f-9f2d-fa9c3f03b552": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.057768044s STEP: Saw pod success Sep 7 09:11:55.839: INFO: Pod "pod-projected-secrets-11e9529f-a5ca-4a1f-9f2d-fa9c3f03b552" satisfied condition "Succeeded or Failed" Sep 7 09:11:55.842: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-11e9529f-a5ca-4a1f-9f2d-fa9c3f03b552 container secret-volume-test: STEP: delete the pod Sep 7 09:11:55.887: INFO: Waiting for pod pod-projected-secrets-11e9529f-a5ca-4a1f-9f2d-fa9c3f03b552 to disappear Sep 7 09:11:55.907: INFO: Pod pod-projected-secrets-11e9529f-a5ca-4a1f-9f2d-fa9c3f03b552 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:11:55.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5641" for this suite. 
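"Consumable in multiple volumes" means one Secret projected into two separate volumes of the same pod. A sketch of that shape (names are illustrative, not the generated ones from this run):

```yaml
# Illustrative sketch only: the same Secret mounted twice via two
# projected volumes, verifying both mounts see identical content.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  volumes:
    - name: projected-secret-volume-1
      projected:
        sources:
          - secret:
              name: projected-secret-test
    - name: projected-secret-volume-2
      projected:
        sources:
          - secret:
              name: projected-secret-test
  containers:
    - name: secret-volume-test
      image: registry.k8s.io/e2e-test-images/agnhost:2.20   # assumed image
      args: ["mounttest", "--file_content=/etc/projected-secret-volume-1/data-1"]
      volumeMounts:
        - name: projected-secret-volume-1
          mountPath: /etc/projected-secret-volume-1
          readOnly: true
        - name: projected-secret-volume-2
          mountPath: /etc/projected-secret-volume-2
          readOnly: true
```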
• [SLOW TEST:6.417 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":271,"skipped":4143,"failed":0} [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:11:55.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8262.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8262.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8262.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8262.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8262.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8262.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8262.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8262.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8262.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8262.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 7 09:12:02.169: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:02.172: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:02.175: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:02.177: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:02.185: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:02.188: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local from pod 
dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:02.190: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:02.193: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:02.198: INFO: Lookups using dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8262.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8262.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local jessie_udp@dns-test-service-2.dns-8262.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8262.svc.cluster.local] Sep 7 09:12:07.203: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:07.207: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:07.211: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8262.svc.cluster.local from pod 
dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:07.214: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:07.224: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:07.228: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:07.231: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:07.234: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:07.240: INFO: Lookups using dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8262.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8262.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local jessie_udp@dns-test-service-2.dns-8262.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8262.svc.cluster.local] Sep 7 09:12:12.203: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:12.207: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:12.211: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:12.214: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:12.225: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:12.228: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:12.232: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8262.svc.cluster.local from pod 
dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:12.235: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:12.242: INFO: Lookups using dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8262.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8262.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local jessie_udp@dns-test-service-2.dns-8262.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8262.svc.cluster.local] Sep 7 09:12:17.203: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:17.207: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:17.210: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:17.213: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8262.svc.cluster.local from pod 
dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:17.223: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:17.225: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:17.228: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:17.231: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:17.238: INFO: Lookups using dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8262.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8262.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local jessie_udp@dns-test-service-2.dns-8262.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8262.svc.cluster.local] Sep 7 09:12:22.203: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local 
from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:22.208: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:22.211: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:22.214: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:22.224: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:22.228: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:22.231: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:22.234: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the 
server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:22.241: INFO: Lookups using dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8262.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8262.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local jessie_udp@dns-test-service-2.dns-8262.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8262.svc.cluster.local] Sep 7 09:12:27.203: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:27.206: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:27.209: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:27.212: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:27.221: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local from pod 
dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:27.224: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:27.227: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:27.230: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8262.svc.cluster.local from pod dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a: the server could not find the requested resource (get pods dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a) Sep 7 09:12:27.237: INFO: Lookups using dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8262.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8262.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8262.svc.cluster.local jessie_udp@dns-test-service-2.dns-8262.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8262.svc.cluster.local] Sep 7 09:12:32.239: INFO: DNS probes using dns-8262/dns-test-939e7153-47d4-4522-999b-62ac19f4fb8a succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:12:33.082: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8262" for this suite. • [SLOW TEST:37.209 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":303,"completed":272,"skipped":4143,"failed":0} SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:12:33.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-9455 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 7 09:12:33.251: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Sep 7 09:12:33.344: INFO: The status of Pod netserver-0 is Pending, waiting for it to be 
Running (with Ready = true) Sep 7 09:12:35.623: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 7 09:12:37.350: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 7 09:12:39.350: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 7 09:12:41.348: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 7 09:12:43.349: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 7 09:12:45.349: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 7 09:12:47.351: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 7 09:12:49.353: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 7 09:12:51.352: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 7 09:12:53.350: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 7 09:12:55.349: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 7 09:12:57.352: INFO: The status of Pod netserver-0 is Running (Ready = true) Sep 7 09:12:57.357: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Sep 7 09:13:03.447: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.192:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9455 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 7 09:13:03.447: INFO: >>> kubeConfig: /root/.kube/config I0907 09:13:03.482951 7 log.go:181] (0xc000642580) (0xc000e69900) Create stream I0907 09:13:03.482982 7 log.go:181] (0xc000642580) (0xc000e69900) Stream added, broadcasting: 1 I0907 09:13:03.485154 7 log.go:181] (0xc000642580) Reply frame received for 1 I0907 09:13:03.485187 7 log.go:181] (0xc000642580) (0xc000ee8280) Create stream I0907 09:13:03.485202 7 log.go:181] (0xc000642580) (0xc000ee8280) Stream added, 
broadcasting: 3 I0907 09:13:03.486297 7 log.go:181] (0xc000642580) Reply frame received for 3 I0907 09:13:03.486338 7 log.go:181] (0xc000642580) (0xc0016c9ea0) Create stream I0907 09:13:03.486354 7 log.go:181] (0xc000642580) (0xc0016c9ea0) Stream added, broadcasting: 5 I0907 09:13:03.487511 7 log.go:181] (0xc000642580) Reply frame received for 5 I0907 09:13:03.547138 7 log.go:181] (0xc000642580) Data frame received for 3 I0907 09:13:03.547180 7 log.go:181] (0xc000ee8280) (3) Data frame handling I0907 09:13:03.547213 7 log.go:181] (0xc000ee8280) (3) Data frame sent I0907 09:13:03.547357 7 log.go:181] (0xc000642580) Data frame received for 3 I0907 09:13:03.547384 7 log.go:181] (0xc000ee8280) (3) Data frame handling I0907 09:13:03.547603 7 log.go:181] (0xc000642580) Data frame received for 5 I0907 09:13:03.547635 7 log.go:181] (0xc0016c9ea0) (5) Data frame handling I0907 09:13:03.549684 7 log.go:181] (0xc000642580) Data frame received for 1 I0907 09:13:03.549701 7 log.go:181] (0xc000e69900) (1) Data frame handling I0907 09:13:03.549726 7 log.go:181] (0xc000e69900) (1) Data frame sent I0907 09:13:03.549950 7 log.go:181] (0xc000642580) (0xc000e69900) Stream removed, broadcasting: 1 I0907 09:13:03.549977 7 log.go:181] (0xc000642580) Go away received I0907 09:13:03.550088 7 log.go:181] (0xc000642580) (0xc000e69900) Stream removed, broadcasting: 1 I0907 09:13:03.550126 7 log.go:181] (0xc000642580) (0xc000ee8280) Stream removed, broadcasting: 3 I0907 09:13:03.550150 7 log.go:181] (0xc000642580) (0xc0016c9ea0) Stream removed, broadcasting: 5 Sep 7 09:13:03.550: INFO: Found all expected endpoints: [netserver-0] Sep 7 09:13:03.553: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.171:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9455 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 7 09:13:03.553: INFO: >>> kubeConfig: 
/root/.kube/config I0907 09:13:03.582511 7 log.go:181] (0xc000642bb0) (0xc000e69e00) Create stream I0907 09:13:03.582541 7 log.go:181] (0xc000642bb0) (0xc000e69e00) Stream added, broadcasting: 1 I0907 09:13:03.586143 7 log.go:181] (0xc000642bb0) Reply frame received for 1 I0907 09:13:03.586323 7 log.go:181] (0xc000642bb0) (0xc003ea1860) Create stream I0907 09:13:03.586363 7 log.go:181] (0xc000642bb0) (0xc003ea1860) Stream added, broadcasting: 3 I0907 09:13:03.588845 7 log.go:181] (0xc000642bb0) Reply frame received for 3 I0907 09:13:03.588897 7 log.go:181] (0xc000642bb0) (0xc0016c9f40) Create stream I0907 09:13:03.588923 7 log.go:181] (0xc000642bb0) (0xc0016c9f40) Stream added, broadcasting: 5 I0907 09:13:03.589995 7 log.go:181] (0xc000642bb0) Reply frame received for 5 I0907 09:13:03.670456 7 log.go:181] (0xc000642bb0) Data frame received for 5 I0907 09:13:03.670513 7 log.go:181] (0xc0016c9f40) (5) Data frame handling I0907 09:13:03.670548 7 log.go:181] (0xc000642bb0) Data frame received for 3 I0907 09:13:03.670571 7 log.go:181] (0xc003ea1860) (3) Data frame handling I0907 09:13:03.670603 7 log.go:181] (0xc003ea1860) (3) Data frame sent I0907 09:13:03.670626 7 log.go:181] (0xc000642bb0) Data frame received for 3 I0907 09:13:03.670647 7 log.go:181] (0xc003ea1860) (3) Data frame handling I0907 09:13:03.672514 7 log.go:181] (0xc000642bb0) Data frame received for 1 I0907 09:13:03.672541 7 log.go:181] (0xc000e69e00) (1) Data frame handling I0907 09:13:03.672554 7 log.go:181] (0xc000e69e00) (1) Data frame sent I0907 09:13:03.672576 7 log.go:181] (0xc000642bb0) (0xc000e69e00) Stream removed, broadcasting: 1 I0907 09:13:03.672599 7 log.go:181] (0xc000642bb0) Go away received I0907 09:13:03.672718 7 log.go:181] (0xc000642bb0) (0xc000e69e00) Stream removed, broadcasting: 1 I0907 09:13:03.672759 7 log.go:181] (0xc000642bb0) (0xc003ea1860) Stream removed, broadcasting: 3 I0907 09:13:03.672785 7 log.go:181] (0xc000642bb0) (0xc0016c9f40) Stream removed, broadcasting: 5 Sep 7 
09:13:03.672: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:13:03.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9455" for this suite. • [SLOW TEST:30.558 seconds] [sig-network] Networking /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":273,"skipped":4150,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:13:03.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be 
provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-50aa6774-e0cc-41de-b929-fd361464066c STEP: Creating configMap with name cm-test-opt-upd-7b082316-7bc7-40ed-bade-8bceb74419a7 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-50aa6774-e0cc-41de-b929-fd361464066c STEP: Updating configmap cm-test-opt-upd-7b082316-7bc7-40ed-bade-8bceb74419a7 STEP: Creating configMap with name cm-test-opt-create-14a73def-7ca2-46aa-b12f-720862c07a52 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:13:11.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1522" for this suite. 
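The "waiting to observe update in volume" step above is a poll loop: the test repeatedly reads the mounted files until the kubelet's periodic sync reflects the deleted, updated, and newly created ConfigMaps. A minimal sketch of that poll-until-condition shape (function and parameter names here are illustrative, not the e2e framework's actual API):

```python
import time

def wait_for_condition(check, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll `check` until it returns True or `timeout` seconds elapse.

    Mirrors the shape of the e2e framework's poll-and-wait helpers;
    this is a simplified sketch, not the framework's real implementation.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if check():
            return True          # condition observed within the deadline
        sleep(interval)          # back off before re-reading the volume
    return False                 # timed out without observing the update

# Illustrative usage: a check that succeeds on the third poll,
# standing in for "the volume now shows the updated ConfigMap data".
state = {"polls": 0}

def volume_reflects_update():
    state["polls"] += 1
    return state["polls"] >= 3
```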
• [SLOW TEST:8.219 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":274,"skipped":4192,"failed":0} S ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:13:11.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should provide secure master service [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:13:11.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "services-2526" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":303,"completed":275,"skipped":4193,"failed":0} ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:13:11.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 7 09:13:12.046: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:13:13.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4327" for this suite. 
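The CRD test above verifies that `default` values declared in a structural OpenAPI schema are applied both when a custom resource is created and when it is read back from storage. A heavily simplified sketch of that defaulting pass (real apiserver defaulting also handles arrays, `additionalProperties`, and pruning, which this sketch omits):

```python
def apply_defaults(obj, schema):
    """Recursively fill missing fields of `obj` from the schema's
    `default` values. Simplified sketch: object properties only."""
    for name, sub in schema.get("properties", {}).items():
        if name not in obj and "default" in sub:
            obj[name] = sub["default"]          # fill the missing field
        elif isinstance(obj.get(name), dict):
            apply_defaults(obj[name], sub)       # descend into nested objects
    return obj

# Illustrative schema: spec.replicas defaults to 1 when unset.
schema = {"properties": {"spec": {"properties": {"replicas": {"default": 1}}}}}
```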
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":303,"completed":276,"skipped":4193,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:13:13.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-1098fc91-58c9-48ed-92d6-5acbf491b7f8 STEP: Creating configMap with name cm-test-opt-upd-e56c4897-6f44-48c9-86d1-08abe4b4bfca STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-1098fc91-58c9-48ed-92d6-5acbf491b7f8 STEP: Updating configmap cm-test-opt-upd-e56c4897-6f44-48c9-86d1-08abe4b4bfca STEP: Creating configMap with name cm-test-opt-create-3a1d95f5-df43-4ddb-81b6-d021e20c22cb STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:14:31.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4364" for this suite. 
• [SLOW TEST:78.611 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":277,"skipped":4200,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:14:31.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 7 09:14:32.004: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9d705074-7e49-4c2e-9ee8-f42edbaabc9a" in namespace 
"projected-8592" to be "Succeeded or Failed" Sep 7 09:14:32.007: INFO: Pod "downwardapi-volume-9d705074-7e49-4c2e-9ee8-f42edbaabc9a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.08683ms Sep 7 09:14:34.012: INFO: Pod "downwardapi-volume-9d705074-7e49-4c2e-9ee8-f42edbaabc9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008437702s Sep 7 09:14:36.016: INFO: Pod "downwardapi-volume-9d705074-7e49-4c2e-9ee8-f42edbaabc9a": Phase="Running", Reason="", readiness=true. Elapsed: 4.012353431s Sep 7 09:14:38.021: INFO: Pod "downwardapi-volume-9d705074-7e49-4c2e-9ee8-f42edbaabc9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016954094s STEP: Saw pod success Sep 7 09:14:38.021: INFO: Pod "downwardapi-volume-9d705074-7e49-4c2e-9ee8-f42edbaabc9a" satisfied condition "Succeeded or Failed" Sep 7 09:14:38.024: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-9d705074-7e49-4c2e-9ee8-f42edbaabc9a container client-container: STEP: delete the pod Sep 7 09:14:38.063: INFO: Waiting for pod downwardapi-volume-9d705074-7e49-4c2e-9ee8-f42edbaabc9a to disappear Sep 7 09:14:38.157: INFO: Pod downwardapi-volume-9d705074-7e49-4c2e-9ee8-f42edbaabc9a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:14:38.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8592" for this suite. 
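The DefaultMode test above mounts a downward API volume without an explicit mode and has the client container print the file's permissions, expecting the default of 0644. The container output compares mode strings in `ls -l` form, which can be reproduced from the octal bits with the standard library:

```python
import stat

def mode_string(mode):
    """Render permission bits the way `ls -l` (and the test's container
    output) shows them for a regular file, e.g. 0o644 -> '-rw-r--r--'."""
    return stat.filemode(stat.S_IFREG | mode)
```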
• [SLOW TEST:6.290 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":278,"skipped":4208,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:14:38.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
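The per-node polling that follows repeatedly logs "DaemonSet pods can't tolerate node latest-control-plane ... skip checking this node": before counting available pods, the test filters out nodes whose NoSchedule taints the DaemonSet's pods do not tolerate. A simplified sketch of that filter (exact key matching only; real toleration matching also handles operators, values, and effects):

```python
def tolerates(taints, tolerations):
    """True if every NoSchedule taint on a node is matched by a
    toleration. Simplified: matches on taint key only."""
    tolerated_keys = {t["key"] for t in tolerations}
    return all(t["key"] in tolerated_keys
               for t in taints if t.get("effect") == "NoSchedule")

def nodes_to_check(nodes, tolerations):
    """Return names of nodes the DaemonSet should be checked on,
    skipping nodes its pods cannot tolerate."""
    return [n["name"] for n in nodes
            if tolerates(n.get("taints", []), tolerations)]

# Illustrative cluster matching the log: one tainted control-plane
# node and two untainted workers, with a DaemonSet that declares
# no tolerations.
nodes = [
    {"name": "latest-control-plane",
     "taints": [{"key": "node-role.kubernetes.io/master",
                 "effect": "NoSchedule"}]},
    {"name": "latest-worker"},
    {"name": "latest-worker2"},
]
```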
Sep 7 09:14:38.490: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:14:38.588: INFO: Number of nodes with available pods: 0 Sep 7 09:14:38.588: INFO: Node latest-worker is running more than one daemon pod Sep 7 09:14:39.593: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:14:39.597: INFO: Number of nodes with available pods: 0 Sep 7 09:14:39.597: INFO: Node latest-worker is running more than one daemon pod Sep 7 09:14:40.722: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:14:40.765: INFO: Number of nodes with available pods: 0 Sep 7 09:14:40.765: INFO: Node latest-worker is running more than one daemon pod Sep 7 09:14:41.593: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:14:41.660: INFO: Number of nodes with available pods: 0 Sep 7 09:14:41.660: INFO: Node latest-worker is running more than one daemon pod Sep 7 09:14:42.609: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:14:42.613: INFO: Number of nodes with available pods: 1 Sep 7 09:14:42.613: INFO: Node latest-worker2 is running more than one daemon pod Sep 7 09:14:43.593: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:14:43.597: INFO: Number of nodes with available pods: 1 Sep 7 09:14:43.597: INFO: Node 
latest-worker2 is running more than one daemon pod Sep 7 09:14:44.594: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:14:44.598: INFO: Number of nodes with available pods: 2 Sep 7 09:14:44.598: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Sep 7 09:14:44.632: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:14:44.666: INFO: Number of nodes with available pods: 1 Sep 7 09:14:44.666: INFO: Node latest-worker is running more than one daemon pod Sep 7 09:14:45.673: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:14:45.676: INFO: Number of nodes with available pods: 1 Sep 7 09:14:45.676: INFO: Node latest-worker is running more than one daemon pod Sep 7 09:14:46.691: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:14:46.694: INFO: Number of nodes with available pods: 1 Sep 7 09:14:46.694: INFO: Node latest-worker is running more than one daemon pod Sep 7 09:14:47.671: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:14:47.675: INFO: Number of nodes with available pods: 1 Sep 7 09:14:47.675: INFO: Node latest-worker is running more than one daemon pod Sep 7 09:14:48.673: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking 
this node Sep 7 09:14:48.677: INFO: Number of nodes with available pods: 1 Sep 7 09:14:48.677: INFO: Node latest-worker is running more than one daemon pod Sep 7 09:14:49.671: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:14:49.674: INFO: Number of nodes with available pods: 1 Sep 7 09:14:49.674: INFO: Node latest-worker is running more than one daemon pod Sep 7 09:14:50.672: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:14:50.676: INFO: Number of nodes with available pods: 1 Sep 7 09:14:50.676: INFO: Node latest-worker is running more than one daemon pod Sep 7 09:14:51.671: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:14:51.675: INFO: Number of nodes with available pods: 1 Sep 7 09:14:51.675: INFO: Node latest-worker is running more than one daemon pod Sep 7 09:14:52.673: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:14:52.677: INFO: Number of nodes with available pods: 1 Sep 7 09:14:52.677: INFO: Node latest-worker is running more than one daemon pod Sep 7 09:14:53.710: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:14:53.713: INFO: Number of nodes with available pods: 1 Sep 7 09:14:53.713: INFO: Node latest-worker is running more than one daemon pod Sep 7 09:14:54.685: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:14:54.688: INFO: Number of nodes with available pods: 1 Sep 7 09:14:54.688: INFO: Node latest-worker is running more than one daemon pod Sep 7 09:14:55.672: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:14:55.675: INFO: Number of nodes with available pods: 2 Sep 7 09:14:55.675: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1786, will wait for the garbage collector to delete the pods Sep 7 09:14:55.738: INFO: Deleting DaemonSet.extensions daemon-set took: 7.175907ms Sep 7 09:14:56.238: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.23349ms Sep 7 09:15:02.242: INFO: Number of nodes with available pods: 0 Sep 7 09:15:02.242: INFO: Number of running nodes: 0, number of available pods: 0 Sep 7 09:15:02.245: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1786/daemonsets","resourceVersion":"300546"},"items":null} Sep 7 09:15:02.277: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1786/pods","resourceVersion":"300546"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:15:02.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1786" for this suite. 
• [SLOW TEST:24.132 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":303,"completed":279,"skipped":4280,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:15:02.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create deployment with httpd image Sep 7 09:15:02.355: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config create -f -' Sep 7 09:15:02.679: INFO: stderr: "" Sep 7 09:15:02.679: INFO: stdout: 
"deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Sep 7 09:15:02.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config diff -f -' Sep 7 09:15:03.181: INFO: rc: 1 Sep 7 09:15:03.181: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config delete -f -' Sep 7 09:15:03.306: INFO: stderr: "" Sep 7 09:15:03.306: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:15:03.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2663" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":303,"completed":280,"skipped":4287,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:15:03.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:15:09.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1644" for this suite. STEP: Destroying namespace "nsdeletetest-6629" for this suite. Sep 7 09:15:09.757: INFO: Namespace nsdeletetest-6629 was already deleted STEP: Destroying namespace "nsdeletetest-4623" for this suite. 
• [SLOW TEST:6.422 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":303,"completed":281,"skipped":4307,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:15:09.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command Sep 7 09:15:09.849: INFO: Waiting up to 5m0s for pod "var-expansion-e6b45a9b-96f0-43b0-96c9-d448a75ffafe" in namespace "var-expansion-8125" to be "Succeeded or Failed" Sep 7 09:15:09.877: INFO: Pod 
"var-expansion-e6b45a9b-96f0-43b0-96c9-d448a75ffafe": Phase="Pending", Reason="", readiness=false. Elapsed: 27.65993ms Sep 7 09:15:11.881: INFO: Pod "var-expansion-e6b45a9b-96f0-43b0-96c9-d448a75ffafe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031822914s Sep 7 09:15:13.886: INFO: Pod "var-expansion-e6b45a9b-96f0-43b0-96c9-d448a75ffafe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036507956s STEP: Saw pod success Sep 7 09:15:13.886: INFO: Pod "var-expansion-e6b45a9b-96f0-43b0-96c9-d448a75ffafe" satisfied condition "Succeeded or Failed" Sep 7 09:15:13.889: INFO: Trying to get logs from node latest-worker2 pod var-expansion-e6b45a9b-96f0-43b0-96c9-d448a75ffafe container dapi-container: STEP: delete the pod Sep 7 09:15:14.062: INFO: Waiting for pod var-expansion-e6b45a9b-96f0-43b0-96c9-d448a75ffafe to disappear Sep 7 09:15:14.193: INFO: Pod var-expansion-e6b45a9b-96f0-43b0-96c9-d448a75ffafe no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:15:14.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8125" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":303,"completed":282,"skipped":4347,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:15:14.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 7 09:15:14.950: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 7 09:15:16.960: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735066914, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735066914, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment 
does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735066915, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735066914, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 7 09:15:20.026: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Sep 7 09:15:24.080: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config attach --namespace=webhook-1504 to-be-attached-pod -i -c=container1' Sep 7 09:15:24.207: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:15:24.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1504" for this suite. STEP: Destroying namespace "webhook-1504-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.151 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":303,"completed":283,"skipped":4354,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:15:24.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: 
Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Sep 7 09:15:24.462: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:15:24.470: INFO: Number of nodes with available pods: 0 Sep 7 09:15:24.470: INFO: Node latest-worker is running more than one daemon pod Sep 7 09:15:25.475: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:15:25.479: INFO: Number of nodes with available pods: 0 Sep 7 09:15:25.479: INFO: Node latest-worker is running more than one daemon pod Sep 7 09:15:26.614: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:15:26.658: INFO: Number of nodes with available pods: 0 Sep 7 09:15:26.658: INFO: Node latest-worker is running more than one daemon pod Sep 7 09:15:27.477: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:15:27.480: INFO: Number of nodes with available pods: 0 Sep 7 09:15:27.480: INFO: Node latest-worker is running more than one daemon pod Sep 7 09:15:28.476: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:15:28.480: INFO: Number of nodes with available pods: 1 Sep 7 09:15:28.480: INFO: Node latest-worker2 is running more than one daemon pod Sep 7 09:15:29.530: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this 
node Sep 7 09:15:29.709: INFO: Number of nodes with available pods: 2 Sep 7 09:15:29.709: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Sep 7 09:15:30.004: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:15:30.034: INFO: Number of nodes with available pods: 1 Sep 7 09:15:30.034: INFO: Node latest-worker is running more than one daemon pod Sep 7 09:15:31.039: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:15:31.044: INFO: Number of nodes with available pods: 1 Sep 7 09:15:31.044: INFO: Node latest-worker is running more than one daemon pod Sep 7 09:15:32.165: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:15:32.168: INFO: Number of nodes with available pods: 1 Sep 7 09:15:32.168: INFO: Node latest-worker is running more than one daemon pod Sep 7 09:15:33.039: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:15:33.043: INFO: Number of nodes with available pods: 1 Sep 7 09:15:33.043: INFO: Node latest-worker is running more than one daemon pod Sep 7 09:15:34.041: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 7 09:15:34.044: INFO: Number of nodes with available pods: 2 Sep 7 09:15:34.044: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-599, will wait for the garbage collector to delete the pods Sep 7 09:15:34.124: INFO: Deleting DaemonSet.extensions daemon-set took: 21.285613ms Sep 7 09:15:34.224: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.260959ms Sep 7 09:15:42.233: INFO: Number of nodes with available pods: 0 Sep 7 09:15:42.233: INFO: Number of running nodes: 0, number of available pods: 0 Sep 7 09:15:42.235: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-599/daemonsets","resourceVersion":"300911"},"items":null} Sep 7 09:15:42.238: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-599/pods","resourceVersion":"300911"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:15:42.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-599" for this suite. 
• [SLOW TEST:17.972 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":303,"completed":284,"skipped":4362,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:15:42.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:15:46.501: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "kubelet-test-1177" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":285,"skipped":4374,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:15:46.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-44 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-44 I0907 09:15:46.731632 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-44, replica count: 2 I0907 09:15:49.782086 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 
09:15:52.782390 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 7 09:15:52.782: INFO: Creating new exec pod Sep 7 09:15:57.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-44 execpodbqjq7 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Sep 7 09:15:58.024: INFO: stderr: "I0907 09:15:57.963423 3249 log.go:181] (0xc0003700b0) (0xc000b9e000) Create stream\nI0907 09:15:57.963475 3249 log.go:181] (0xc0003700b0) (0xc000b9e000) Stream added, broadcasting: 1\nI0907 09:15:57.965029 3249 log.go:181] (0xc0003700b0) Reply frame received for 1\nI0907 09:15:57.965067 3249 log.go:181] (0xc0003700b0) (0xc00084b900) Create stream\nI0907 09:15:57.965080 3249 log.go:181] (0xc0003700b0) (0xc00084b900) Stream added, broadcasting: 3\nI0907 09:15:57.965790 3249 log.go:181] (0xc0003700b0) Reply frame received for 3\nI0907 09:15:57.965816 3249 log.go:181] (0xc0003700b0) (0xc000b9e0a0) Create stream\nI0907 09:15:57.965825 3249 log.go:181] (0xc0003700b0) (0xc000b9e0a0) Stream added, broadcasting: 5\nI0907 09:15:57.966368 3249 log.go:181] (0xc0003700b0) Reply frame received for 5\nI0907 09:15:58.015960 3249 log.go:181] (0xc0003700b0) Data frame received for 5\nI0907 09:15:58.015990 3249 log.go:181] (0xc000b9e0a0) (5) Data frame handling\nI0907 09:15:58.016120 3249 log.go:181] (0xc000b9e0a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0907 09:15:58.016830 3249 log.go:181] (0xc0003700b0) Data frame received for 5\nI0907 09:15:58.016845 3249 log.go:181] (0xc000b9e0a0) (5) Data frame handling\nI0907 09:15:58.016851 3249 log.go:181] (0xc000b9e0a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0907 09:15:58.017194 3249 log.go:181] (0xc0003700b0) Data frame received for 3\nI0907 09:15:58.017219 3249 log.go:181] (0xc00084b900) (3) 
Data frame handling\nI0907 09:15:58.017321 3249 log.go:181] (0xc0003700b0) Data frame received for 5\nI0907 09:15:58.017334 3249 log.go:181] (0xc000b9e0a0) (5) Data frame handling\nI0907 09:15:58.019251 3249 log.go:181] (0xc0003700b0) Data frame received for 1\nI0907 09:15:58.019285 3249 log.go:181] (0xc000b9e000) (1) Data frame handling\nI0907 09:15:58.019301 3249 log.go:181] (0xc000b9e000) (1) Data frame sent\nI0907 09:15:58.019315 3249 log.go:181] (0xc0003700b0) (0xc000b9e000) Stream removed, broadcasting: 1\nI0907 09:15:58.019339 3249 log.go:181] (0xc0003700b0) Go away received\nI0907 09:15:58.019710 3249 log.go:181] (0xc0003700b0) (0xc000b9e000) Stream removed, broadcasting: 1\nI0907 09:15:58.019729 3249 log.go:181] (0xc0003700b0) (0xc00084b900) Stream removed, broadcasting: 3\nI0907 09:15:58.019738 3249 log.go:181] (0xc0003700b0) (0xc000b9e0a0) Stream removed, broadcasting: 5\n" Sep 7 09:15:58.024: INFO: stdout: "" Sep 7 09:15:58.025: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-44 execpodbqjq7 -- /bin/sh -x -c nc -zv -t -w 2 10.110.114.167 80' Sep 7 09:15:58.239: INFO: stderr: "I0907 09:15:58.156424 3267 log.go:181] (0xc000e9ae70) (0xc000517f40) Create stream\nI0907 09:15:58.156476 3267 log.go:181] (0xc000e9ae70) (0xc000517f40) Stream added, broadcasting: 1\nI0907 09:15:58.161001 3267 log.go:181] (0xc000e9ae70) Reply frame received for 1\nI0907 09:15:58.161030 3267 log.go:181] (0xc000e9ae70) (0xc0006ac000) Create stream\nI0907 09:15:58.161038 3267 log.go:181] (0xc000e9ae70) (0xc0006ac000) Stream added, broadcasting: 3\nI0907 09:15:58.161869 3267 log.go:181] (0xc000e9ae70) Reply frame received for 3\nI0907 09:15:58.161911 3267 log.go:181] (0xc000e9ae70) (0xc000516280) Create stream\nI0907 09:15:58.161925 3267 log.go:181] (0xc000e9ae70) (0xc000516280) Stream added, broadcasting: 5\nI0907 09:15:58.162995 3267 log.go:181] (0xc000e9ae70) Reply frame received for 5\nI0907 
09:15:58.233088 3267 log.go:181] (0xc000e9ae70) Data frame received for 3\nI0907 09:15:58.233135 3267 log.go:181] (0xc0006ac000) (3) Data frame handling\nI0907 09:15:58.233160 3267 log.go:181] (0xc000e9ae70) Data frame received for 5\nI0907 09:15:58.233173 3267 log.go:181] (0xc000516280) (5) Data frame handling\nI0907 09:15:58.233191 3267 log.go:181] (0xc000516280) (5) Data frame sent\nI0907 09:15:58.233202 3267 log.go:181] (0xc000e9ae70) Data frame received for 5\nI0907 09:15:58.233210 3267 log.go:181] (0xc000516280) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.114.167 80\nConnection to 10.110.114.167 80 port [tcp/http] succeeded!\nI0907 09:15:58.234494 3267 log.go:181] (0xc000e9ae70) Data frame received for 1\nI0907 09:15:58.234518 3267 log.go:181] (0xc000517f40) (1) Data frame handling\nI0907 09:15:58.234527 3267 log.go:181] (0xc000517f40) (1) Data frame sent\nI0907 09:15:58.234540 3267 log.go:181] (0xc000e9ae70) (0xc000517f40) Stream removed, broadcasting: 1\nI0907 09:15:58.234557 3267 log.go:181] (0xc000e9ae70) Go away received\nI0907 09:15:58.235066 3267 log.go:181] (0xc000e9ae70) (0xc000517f40) Stream removed, broadcasting: 1\nI0907 09:15:58.235092 3267 log.go:181] (0xc000e9ae70) (0xc0006ac000) Stream removed, broadcasting: 3\nI0907 09:15:58.235105 3267 log.go:181] (0xc000e9ae70) (0xc000516280) Stream removed, broadcasting: 5\n" Sep 7 09:15:58.239: INFO: stdout: "" Sep 7 09:15:58.240: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-44 execpodbqjq7 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 30255' Sep 7 09:15:58.457: INFO: stderr: "I0907 09:15:58.371858 3285 log.go:181] (0xc0006c4000) (0xc0006bc000) Create stream\nI0907 09:15:58.371947 3285 log.go:181] (0xc0006c4000) (0xc0006bc000) Stream added, broadcasting: 1\nI0907 09:15:58.373843 3285 log.go:181] (0xc0006c4000) Reply frame received for 1\nI0907 09:15:58.373877 3285 log.go:181] (0xc0006c4000) (0xc000752140) Create 
stream\nI0907 09:15:58.373887 3285 log.go:181] (0xc0006c4000) (0xc000752140) Stream added, broadcasting: 3\nI0907 09:15:58.374805 3285 log.go:181] (0xc0006c4000) Reply frame received for 3\nI0907 09:15:58.374839 3285 log.go:181] (0xc0006c4000) (0xc0007521e0) Create stream\nI0907 09:15:58.374857 3285 log.go:181] (0xc0006c4000) (0xc0007521e0) Stream added, broadcasting: 5\nI0907 09:15:58.375765 3285 log.go:181] (0xc0006c4000) Reply frame received for 5\nI0907 09:15:58.447526 3285 log.go:181] (0xc0006c4000) Data frame received for 3\nI0907 09:15:58.447571 3285 log.go:181] (0xc000752140) (3) Data frame handling\nI0907 09:15:58.447596 3285 log.go:181] (0xc0006c4000) Data frame received for 5\nI0907 09:15:58.447608 3285 log.go:181] (0xc0007521e0) (5) Data frame handling\nI0907 09:15:58.447621 3285 log.go:181] (0xc0007521e0) (5) Data frame sent\nI0907 09:15:58.447632 3285 log.go:181] (0xc0006c4000) Data frame received for 5\nI0907 09:15:58.447641 3285 log.go:181] (0xc0007521e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.15 30255\nConnection to 172.18.0.15 30255 port [tcp/30255] succeeded!\nI0907 09:15:58.452492 3285 log.go:181] (0xc0006c4000) Data frame received for 1\nI0907 09:15:58.452530 3285 log.go:181] (0xc0006bc000) (1) Data frame handling\nI0907 09:15:58.452560 3285 log.go:181] (0xc0006bc000) (1) Data frame sent\nI0907 09:15:58.452576 3285 log.go:181] (0xc0006c4000) (0xc0006bc000) Stream removed, broadcasting: 1\nI0907 09:15:58.452590 3285 log.go:181] (0xc0006c4000) Go away received\nI0907 09:15:58.453055 3285 log.go:181] (0xc0006c4000) (0xc0006bc000) Stream removed, broadcasting: 1\nI0907 09:15:58.453075 3285 log.go:181] (0xc0006c4000) (0xc000752140) Stream removed, broadcasting: 3\nI0907 09:15:58.453084 3285 log.go:181] (0xc0006c4000) (0xc0007521e0) Stream removed, broadcasting: 5\n" Sep 7 09:15:58.457: INFO: stdout: "" Sep 7 09:15:58.457: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec 
--namespace=services-44 execpodbqjq7 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 30255' Sep 7 09:15:58.688: INFO: stderr: "I0907 09:15:58.607525 3303 log.go:181] (0xc000ea1290) (0xc000e98960) Create stream\nI0907 09:15:58.607583 3303 log.go:181] (0xc000ea1290) (0xc000e98960) Stream added, broadcasting: 1\nI0907 09:15:58.612969 3303 log.go:181] (0xc000ea1290) Reply frame received for 1\nI0907 09:15:58.613037 3303 log.go:181] (0xc000ea1290) (0xc000c12000) Create stream\nI0907 09:15:58.613066 3303 log.go:181] (0xc000ea1290) (0xc000c12000) Stream added, broadcasting: 3\nI0907 09:15:58.614038 3303 log.go:181] (0xc000ea1290) Reply frame received for 3\nI0907 09:15:58.614077 3303 log.go:181] (0xc000ea1290) (0xc000e98000) Create stream\nI0907 09:15:58.614088 3303 log.go:181] (0xc000ea1290) (0xc000e98000) Stream added, broadcasting: 5\nI0907 09:15:58.614911 3303 log.go:181] (0xc000ea1290) Reply frame received for 5\nI0907 09:15:58.680734 3303 log.go:181] (0xc000ea1290) Data frame received for 5\nI0907 09:15:58.680783 3303 log.go:181] (0xc000e98000) (5) Data frame handling\nI0907 09:15:58.680822 3303 log.go:181] (0xc000e98000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.14 30255\nConnection to 172.18.0.14 30255 port [tcp/30255] succeeded!\nI0907 09:15:58.681344 3303 log.go:181] (0xc000ea1290) Data frame received for 5\nI0907 09:15:58.681363 3303 log.go:181] (0xc000e98000) (5) Data frame handling\nI0907 09:15:58.681383 3303 log.go:181] (0xc000ea1290) Data frame received for 3\nI0907 09:15:58.681392 3303 log.go:181] (0xc000c12000) (3) Data frame handling\nI0907 09:15:58.683113 3303 log.go:181] (0xc000ea1290) Data frame received for 1\nI0907 09:15:58.683155 3303 log.go:181] (0xc000e98960) (1) Data frame handling\nI0907 09:15:58.683198 3303 log.go:181] (0xc000e98960) (1) Data frame sent\nI0907 09:15:58.683229 3303 log.go:181] (0xc000ea1290) (0xc000e98960) Stream removed, broadcasting: 1\nI0907 09:15:58.683250 3303 log.go:181] (0xc000ea1290) Go away received\nI0907 
09:15:58.683591 3303 log.go:181] (0xc000ea1290) (0xc000e98960) Stream removed, broadcasting: 1\nI0907 09:15:58.683607 3303 log.go:181] (0xc000ea1290) (0xc000c12000) Stream removed, broadcasting: 3\nI0907 09:15:58.683615 3303 log.go:181] (0xc000ea1290) (0xc000e98000) Stream removed, broadcasting: 5\n"
Sep 7 09:15:58.688: INFO: stdout: ""
Sep 7 09:15:58.688: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 09:15:58.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-44" for this suite.
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:12.250 seconds]
[sig-network] Services
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to change the type from ExternalName to NodePort [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":303,"completed":286,"skipped":4381,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 09:15:58.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-8314
[It] should perform rolling updates and roll backs of template modifications [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a new StatefulSet
Sep 7 09:15:58.893: INFO: Found 0 stateful pods, waiting for 3
Sep 7 09:16:08.899: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Sep 7 09:16:08.899: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Sep 7 09:16:08.899: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Sep 7 09:16:18.898: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Sep 7 09:16:18.898: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Sep 7 09:16:18.898: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Sep 7 09:16:18.909: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec
--namespace=statefulset-8314 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 7 09:16:19.172: INFO: stderr: "I0907 09:16:19.055833 3321 log.go:181] (0xc00022c6e0) (0xc000c0a280) Create stream\nI0907 09:16:19.056110 3321 log.go:181] (0xc00022c6e0) (0xc000c0a280) Stream added, broadcasting: 1\nI0907 09:16:19.060336 3321 log.go:181] (0xc00022c6e0) Reply frame received for 1\nI0907 09:16:19.060383 3321 log.go:181] (0xc00022c6e0) (0xc00083c000) Create stream\nI0907 09:16:19.060406 3321 log.go:181] (0xc00022c6e0) (0xc00083c000) Stream added, broadcasting: 3\nI0907 09:16:19.061558 3321 log.go:181] (0xc00022c6e0) Reply frame received for 3\nI0907 09:16:19.061613 3321 log.go:181] (0xc00022c6e0) (0xc000934000) Create stream\nI0907 09:16:19.061628 3321 log.go:181] (0xc00022c6e0) (0xc000934000) Stream added, broadcasting: 5\nI0907 09:16:19.062658 3321 log.go:181] (0xc00022c6e0) Reply frame received for 5\nI0907 09:16:19.137370 3321 log.go:181] (0xc00022c6e0) Data frame received for 5\nI0907 09:16:19.137400 3321 log.go:181] (0xc000934000) (5) Data frame handling\nI0907 09:16:19.137425 3321 log.go:181] (0xc000934000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0907 09:16:19.163190 3321 log.go:181] (0xc00022c6e0) Data frame received for 3\nI0907 09:16:19.163230 3321 log.go:181] (0xc00083c000) (3) Data frame handling\nI0907 09:16:19.163270 3321 log.go:181] (0xc00083c000) (3) Data frame sent\nI0907 09:16:19.163289 3321 log.go:181] (0xc00022c6e0) Data frame received for 3\nI0907 09:16:19.163300 3321 log.go:181] (0xc00083c000) (3) Data frame handling\nI0907 09:16:19.163393 3321 log.go:181] (0xc00022c6e0) Data frame received for 5\nI0907 09:16:19.163479 3321 log.go:181] (0xc000934000) (5) Data frame handling\nI0907 09:16:19.167637 3321 log.go:181] (0xc00022c6e0) Data frame received for 1\nI0907 09:16:19.167660 3321 log.go:181] (0xc000c0a280) (1) Data frame handling\nI0907 09:16:19.167668 3321 log.go:181] (0xc000c0a280) 
(1) Data frame sent\nI0907 09:16:19.167677 3321 log.go:181] (0xc00022c6e0) (0xc000c0a280) Stream removed, broadcasting: 1\nI0907 09:16:19.167693 3321 log.go:181] (0xc00022c6e0) Go away received\nI0907 09:16:19.168328 3321 log.go:181] (0xc00022c6e0) (0xc000c0a280) Stream removed, broadcasting: 1\nI0907 09:16:19.168355 3321 log.go:181] (0xc00022c6e0) (0xc00083c000) Stream removed, broadcasting: 3\nI0907 09:16:19.168375 3321 log.go:181] (0xc00022c6e0) (0xc000934000) Stream removed, broadcasting: 5\n" Sep 7 09:16:19.172: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 7 09:16:19.172: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Sep 7 09:16:29.202: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Sep 7 09:16:39.266: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8314 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 7 09:16:39.525: INFO: stderr: "I0907 09:16:39.418474 3339 log.go:181] (0xc0006b3a20) (0xc00062c640) Create stream\nI0907 09:16:39.418543 3339 log.go:181] (0xc0006b3a20) (0xc00062c640) Stream added, broadcasting: 1\nI0907 09:16:39.422788 3339 log.go:181] (0xc0006b3a20) Reply frame received for 1\nI0907 09:16:39.422878 3339 log.go:181] (0xc0006b3a20) (0xc00062c000) Create stream\nI0907 09:16:39.422896 3339 log.go:181] (0xc0006b3a20) (0xc00062c000) Stream added, broadcasting: 3\nI0907 09:16:39.423803 3339 log.go:181] (0xc0006b3a20) Reply frame received for 3\nI0907 09:16:39.423858 3339 log.go:181] (0xc0006b3a20) (0xc00062c0a0) Create stream\nI0907 09:16:39.423875 3339 log.go:181] (0xc0006b3a20) (0xc00062c0a0) Stream added, 
broadcasting: 5\nI0907 09:16:39.424862 3339 log.go:181] (0xc0006b3a20) Reply frame received for 5\nI0907 09:16:39.518423 3339 log.go:181] (0xc0006b3a20) Data frame received for 3\nI0907 09:16:39.518486 3339 log.go:181] (0xc00062c000) (3) Data frame handling\nI0907 09:16:39.518516 3339 log.go:181] (0xc00062c000) (3) Data frame sent\nI0907 09:16:39.518550 3339 log.go:181] (0xc0006b3a20) Data frame received for 3\nI0907 09:16:39.518591 3339 log.go:181] (0xc00062c000) (3) Data frame handling\nI0907 09:16:39.518671 3339 log.go:181] (0xc0006b3a20) Data frame received for 5\nI0907 09:16:39.518715 3339 log.go:181] (0xc00062c0a0) (5) Data frame handling\nI0907 09:16:39.518766 3339 log.go:181] (0xc00062c0a0) (5) Data frame sent\nI0907 09:16:39.518808 3339 log.go:181] (0xc0006b3a20) Data frame received for 5\nI0907 09:16:39.518832 3339 log.go:181] (0xc00062c0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0907 09:16:39.521612 3339 log.go:181] (0xc0006b3a20) Data frame received for 1\nI0907 09:16:39.521629 3339 log.go:181] (0xc00062c640) (1) Data frame handling\nI0907 09:16:39.521644 3339 log.go:181] (0xc00062c640) (1) Data frame sent\nI0907 09:16:39.521728 3339 log.go:181] (0xc0006b3a20) (0xc00062c640) Stream removed, broadcasting: 1\nI0907 09:16:39.521793 3339 log.go:181] (0xc0006b3a20) Go away received\nI0907 09:16:39.522015 3339 log.go:181] (0xc0006b3a20) (0xc00062c640) Stream removed, broadcasting: 1\nI0907 09:16:39.522029 3339 log.go:181] (0xc0006b3a20) (0xc00062c000) Stream removed, broadcasting: 3\nI0907 09:16:39.522034 3339 log.go:181] (0xc0006b3a20) (0xc00062c0a0) Stream removed, broadcasting: 5\n" Sep 7 09:16:39.525: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 7 09:16:39.525: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 7 09:16:49.547: INFO: Waiting for StatefulSet statefulset-8314/ss2 to 
complete update Sep 7 09:16:49.547: INFO: Waiting for Pod statefulset-8314/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 7 09:16:49.547: INFO: Waiting for Pod statefulset-8314/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 7 09:16:49.547: INFO: Waiting for Pod statefulset-8314/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 7 09:16:59.553: INFO: Waiting for StatefulSet statefulset-8314/ss2 to complete update Sep 7 09:16:59.553: INFO: Waiting for Pod statefulset-8314/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 7 09:16:59.553: INFO: Waiting for Pod statefulset-8314/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 7 09:17:09.592: INFO: Waiting for StatefulSet statefulset-8314/ss2 to complete update STEP: Rolling back to a previous revision Sep 7 09:17:19.556: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8314 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 7 09:17:19.828: INFO: stderr: "I0907 09:17:19.697985 3358 log.go:181] (0xc000250000) (0xc00083a000) Create stream\nI0907 09:17:19.698082 3358 log.go:181] (0xc000250000) (0xc00083a000) Stream added, broadcasting: 1\nI0907 09:17:19.700184 3358 log.go:181] (0xc000250000) Reply frame received for 1\nI0907 09:17:19.700233 3358 log.go:181] (0xc000250000) (0xc000aafea0) Create stream\nI0907 09:17:19.700261 3358 log.go:181] (0xc000250000) (0xc000aafea0) Stream added, broadcasting: 3\nI0907 09:17:19.701138 3358 log.go:181] (0xc000250000) Reply frame received for 3\nI0907 09:17:19.701178 3358 log.go:181] (0xc000250000) (0xc00039e140) Create stream\nI0907 09:17:19.701193 3358 log.go:181] (0xc000250000) (0xc00039e140) Stream added, broadcasting: 5\nI0907 09:17:19.701793 3358 log.go:181] (0xc000250000) Reply frame received for 5\nI0907 09:17:19.785856 3358 log.go:181] 
(0xc000250000) Data frame received for 5\nI0907 09:17:19.785889 3358 log.go:181] (0xc00039e140) (5) Data frame handling\nI0907 09:17:19.785909 3358 log.go:181] (0xc00039e140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0907 09:17:19.819870 3358 log.go:181] (0xc000250000) Data frame received for 3\nI0907 09:17:19.819902 3358 log.go:181] (0xc000aafea0) (3) Data frame handling\nI0907 09:17:19.819933 3358 log.go:181] (0xc000aafea0) (3) Data frame sent\nI0907 09:17:19.820417 3358 log.go:181] (0xc000250000) Data frame received for 3\nI0907 09:17:19.820453 3358 log.go:181] (0xc000aafea0) (3) Data frame handling\nI0907 09:17:19.820482 3358 log.go:181] (0xc000250000) Data frame received for 5\nI0907 09:17:19.820501 3358 log.go:181] (0xc00039e140) (5) Data frame handling\nI0907 09:17:19.822319 3358 log.go:181] (0xc000250000) Data frame received for 1\nI0907 09:17:19.822343 3358 log.go:181] (0xc00083a000) (1) Data frame handling\nI0907 09:17:19.822358 3358 log.go:181] (0xc00083a000) (1) Data frame sent\nI0907 09:17:19.822370 3358 log.go:181] (0xc000250000) (0xc00083a000) Stream removed, broadcasting: 1\nI0907 09:17:19.822390 3358 log.go:181] (0xc000250000) Go away received\nI0907 09:17:19.822967 3358 log.go:181] (0xc000250000) (0xc00083a000) Stream removed, broadcasting: 1\nI0907 09:17:19.822994 3358 log.go:181] (0xc000250000) (0xc000aafea0) Stream removed, broadcasting: 3\nI0907 09:17:19.823008 3358 log.go:181] (0xc000250000) (0xc00039e140) Stream removed, broadcasting: 5\n" Sep 7 09:17:19.828: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 7 09:17:19.828: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 7 09:17:29.865: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Sep 7 09:17:39.967: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-8314 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 7 09:17:40.231: INFO: stderr: "I0907 09:17:40.128707 3377 log.go:181] (0xc000444e70) (0xc00051e500) Create stream\nI0907 09:17:40.128769 3377 log.go:181] (0xc000444e70) (0xc00051e500) Stream added, broadcasting: 1\nI0907 09:17:40.133310 3377 log.go:181] (0xc000444e70) Reply frame received for 1\nI0907 09:17:40.133351 3377 log.go:181] (0xc000444e70) (0xc0004840a0) Create stream\nI0907 09:17:40.133363 3377 log.go:181] (0xc000444e70) (0xc0004840a0) Stream added, broadcasting: 3\nI0907 09:17:40.134160 3377 log.go:181] (0xc000444e70) Reply frame received for 3\nI0907 09:17:40.134189 3377 log.go:181] (0xc000444e70) (0xc0001ac1e0) Create stream\nI0907 09:17:40.134198 3377 log.go:181] (0xc000444e70) (0xc0001ac1e0) Stream added, broadcasting: 5\nI0907 09:17:40.135009 3377 log.go:181] (0xc000444e70) Reply frame received for 5\nI0907 09:17:40.223661 3377 log.go:181] (0xc000444e70) Data frame received for 5\nI0907 09:17:40.223717 3377 log.go:181] (0xc0001ac1e0) (5) Data frame handling\nI0907 09:17:40.223747 3377 log.go:181] (0xc0001ac1e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0907 09:17:40.223778 3377 log.go:181] (0xc000444e70) Data frame received for 3\nI0907 09:17:40.223794 3377 log.go:181] (0xc0004840a0) (3) Data frame handling\nI0907 09:17:40.223811 3377 log.go:181] (0xc0004840a0) (3) Data frame sent\nI0907 09:17:40.223828 3377 log.go:181] (0xc000444e70) Data frame received for 3\nI0907 09:17:40.223843 3377 log.go:181] (0xc0004840a0) (3) Data frame handling\nI0907 09:17:40.223923 3377 log.go:181] (0xc000444e70) Data frame received for 5\nI0907 09:17:40.223992 3377 log.go:181] (0xc0001ac1e0) (5) Data frame handling\nI0907 09:17:40.225811 3377 log.go:181] (0xc000444e70) Data frame received for 1\nI0907 09:17:40.225843 3377 log.go:181] (0xc00051e500) (1) Data frame handling\nI0907 
09:17:40.225863 3377 log.go:181] (0xc00051e500) (1) Data frame sent\nI0907 09:17:40.225889 3377 log.go:181] (0xc000444e70) (0xc00051e500) Stream removed, broadcasting: 1\nI0907 09:17:40.225914 3377 log.go:181] (0xc000444e70) Go away received\nI0907 09:17:40.226461 3377 log.go:181] (0xc000444e70) (0xc00051e500) Stream removed, broadcasting: 1\nI0907 09:17:40.226486 3377 log.go:181] (0xc000444e70) (0xc0004840a0) Stream removed, broadcasting: 3\nI0907 09:17:40.226498 3377 log.go:181] (0xc000444e70) (0xc0001ac1e0) Stream removed, broadcasting: 5\n" Sep 7 09:17:40.231: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 7 09:17:40.231: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 7 09:17:50.255: INFO: Waiting for StatefulSet statefulset-8314/ss2 to complete update Sep 7 09:17:50.255: INFO: Waiting for Pod statefulset-8314/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Sep 7 09:17:50.255: INFO: Waiting for Pod statefulset-8314/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Sep 7 09:17:50.255: INFO: Waiting for Pod statefulset-8314/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Sep 7 09:18:00.263: INFO: Waiting for StatefulSet statefulset-8314/ss2 to complete update Sep 7 09:18:00.263: INFO: Waiting for Pod statefulset-8314/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Sep 7 09:18:00.263: INFO: Waiting for Pod statefulset-8314/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Sep 7 09:18:10.266: INFO: Waiting for StatefulSet statefulset-8314/ss2 to complete update Sep 7 09:18:10.266: INFO: Waiting for Pod statefulset-8314/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Sep 7 09:18:20.265: INFO: Deleting all statefulset in ns statefulset-8314
Sep 7 09:18:20.267: INFO: Scaling statefulset ss2 to 0
Sep 7 09:18:40.320: INFO: Waiting for statefulset status.replicas updated to 0
Sep 7 09:18:40.322: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 09:18:40.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8314" for this suite.
• [SLOW TEST:161.612 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should perform rolling updates and roll backs of template modifications [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":303,"completed":287,"skipped":4421,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 09:18:40.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Sep 7 09:18:40.438: INFO: Waiting up to 5m0s for pod "downward-api-79e63b07-85a9-4ecd-8f1f-c9127d82b26f" in namespace "downward-api-2154" to be "Succeeded or Failed"
Sep 7 09:18:40.455: INFO: Pod "downward-api-79e63b07-85a9-4ecd-8f1f-c9127d82b26f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.593299ms
Sep 7 09:18:42.459: INFO: Pod "downward-api-79e63b07-85a9-4ecd-8f1f-c9127d82b26f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02113388s
Sep 7 09:18:44.464: INFO: Pod "downward-api-79e63b07-85a9-4ecd-8f1f-c9127d82b26f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025636663s
STEP: Saw pod success
Sep 7 09:18:44.464: INFO: Pod "downward-api-79e63b07-85a9-4ecd-8f1f-c9127d82b26f" satisfied condition "Succeeded or Failed"
Sep 7 09:18:44.467: INFO: Trying to get logs from node latest-worker2 pod downward-api-79e63b07-85a9-4ecd-8f1f-c9127d82b26f container dapi-container:
STEP: delete the pod
Sep 7 09:18:44.556: INFO: Waiting for pod downward-api-79e63b07-85a9-4ecd-8f1f-c9127d82b26f to disappear
Sep 7 09:18:44.621: INFO: Pod downward-api-79e63b07-85a9-4ecd-8f1f-c9127d82b26f no longer exists
[AfterEach] [sig-node] Downward API
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 7 09:18:44.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2154" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":303,"completed":288,"skipped":4469,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 7 09:18:44.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-4339 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4339 STEP: creating replication controller externalsvc in namespace services-4339 I0907 09:18:44.832496 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4339, replica count: 2 I0907 09:18:47.882906 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 09:18:50.883181 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Sep 7 09:18:50.977: INFO: Creating new exec pod Sep 7 09:18:55.059: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-4339 execpodxpxpj -- /bin/sh -x -c nslookup nodeport-service.services-4339.svc.cluster.local' Sep 7 09:18:55.316: INFO: stderr: "I0907 09:18:55.209197 3395 log.go:181] (0xc000958fd0) (0xc00031c460) Create stream\nI0907 09:18:55.209278 3395 log.go:181] (0xc000958fd0) (0xc00031c460) Stream added, broadcasting: 1\nI0907 09:18:55.214948 3395 log.go:181] (0xc000958fd0) Reply frame received for 1\nI0907 09:18:55.214983 3395 log.go:181] (0xc000958fd0) (0xc00031cc80) Create stream\nI0907 09:18:55.214992 3395 log.go:181] (0xc000958fd0) 
(0xc00031cc80) Stream added, broadcasting: 3\nI0907 09:18:55.215863 3395 log.go:181] (0xc000958fd0) Reply frame received for 3\nI0907 09:18:55.215894 3395 log.go:181] (0xc000958fd0) (0xc00090dea0) Create stream\nI0907 09:18:55.215901 3395 log.go:181] (0xc000958fd0) (0xc00090dea0) Stream added, broadcasting: 5\nI0907 09:18:55.216846 3395 log.go:181] (0xc000958fd0) Reply frame received for 5\nI0907 09:18:55.295317 3395 log.go:181] (0xc000958fd0) Data frame received for 5\nI0907 09:18:55.295347 3395 log.go:181] (0xc00090dea0) (5) Data frame handling\nI0907 09:18:55.295368 3395 log.go:181] (0xc00090dea0) (5) Data frame sent\n+ nslookup nodeport-service.services-4339.svc.cluster.local\nI0907 09:18:55.307112 3395 log.go:181] (0xc000958fd0) Data frame received for 3\nI0907 09:18:55.307146 3395 log.go:181] (0xc00031cc80) (3) Data frame handling\nI0907 09:18:55.307180 3395 log.go:181] (0xc00031cc80) (3) Data frame sent\nI0907 09:18:55.308547 3395 log.go:181] (0xc000958fd0) Data frame received for 3\nI0907 09:18:55.308579 3395 log.go:181] (0xc00031cc80) (3) Data frame handling\nI0907 09:18:55.308607 3395 log.go:181] (0xc00031cc80) (3) Data frame sent\nI0907 09:18:55.308839 3395 log.go:181] (0xc000958fd0) Data frame received for 3\nI0907 09:18:55.308873 3395 log.go:181] (0xc00031cc80) (3) Data frame handling\nI0907 09:18:55.309096 3395 log.go:181] (0xc000958fd0) Data frame received for 5\nI0907 09:18:55.309142 3395 log.go:181] (0xc00090dea0) (5) Data frame handling\nI0907 09:18:55.311855 3395 log.go:181] (0xc000958fd0) Data frame received for 1\nI0907 09:18:55.311886 3395 log.go:181] (0xc00031c460) (1) Data frame handling\nI0907 09:18:55.311908 3395 log.go:181] (0xc00031c460) (1) Data frame sent\nI0907 09:18:55.311939 3395 log.go:181] (0xc000958fd0) (0xc00031c460) Stream removed, broadcasting: 1\nI0907 09:18:55.311978 3395 log.go:181] (0xc000958fd0) Go away received\nI0907 09:18:55.312676 3395 log.go:181] (0xc000958fd0) (0xc00031c460) Stream removed, broadcasting: 1\nI0907 
09:18:55.312693 3395 log.go:181] (0xc000958fd0) (0xc00031cc80) Stream removed, broadcasting: 3\nI0907 09:18:55.312701 3395 log.go:181] (0xc000958fd0) (0xc00090dea0) Stream removed, broadcasting: 5\n" Sep 7 09:18:55.316: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4339.svc.cluster.local\tcanonical name = externalsvc.services-4339.svc.cluster.local.\nName:\texternalsvc.services-4339.svc.cluster.local\nAddress: 10.107.55.192\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4339, will wait for the garbage collector to delete the pods Sep 7 09:18:55.376: INFO: Deleting ReplicationController externalsvc took: 6.171408ms Sep 7 09:18:55.777: INFO: Terminating ReplicationController externalsvc pods took: 400.190355ms Sep 7 09:19:02.328: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:19:02.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4339" for this suite. 
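The Services test above converts a NodePort service into an ExternalName service and then verifies, via `nslookup` from an exec pod, that the old service name resolves as a CNAME to the target FQDN. A minimal sketch of the end state the test produces, reusing the names from the log (`nodeport-service`, `externalsvc`, namespace `services-4339`); note that when converting an existing service you must also clear `clusterIP` and any `nodePort` fields, which the e2e framework does internally:

```yaml
# Sketch only: the manifest the NodePort service effectively becomes
# after the test's type change. Names are taken from the log output.
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-4339
spec:
  type: ExternalName
  # DNS queries for nodeport-service.services-4339.svc.cluster.local
  # now return a CNAME pointing here, matching the nslookup output above.
  externalName: externalsvc.services-4339.svc.cluster.local
```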
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:17.770 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":303,"completed":289,"skipped":4518,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:19:02.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Sep 7 09:19:02.456: INFO: Waiting up to 5m0s for pod "pod-ea1b689c-a4f2-4ca3-b275-40a4bb96f295" in namespace "emptydir-1662" to be "Succeeded or Failed" Sep 7 09:19:02.470: 
INFO: Pod "pod-ea1b689c-a4f2-4ca3-b275-40a4bb96f295": Phase="Pending", Reason="", readiness=false. Elapsed: 13.954035ms Sep 7 09:19:04.475: INFO: Pod "pod-ea1b689c-a4f2-4ca3-b275-40a4bb96f295": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018448585s Sep 7 09:19:06.480: INFO: Pod "pod-ea1b689c-a4f2-4ca3-b275-40a4bb96f295": Phase="Running", Reason="", readiness=true. Elapsed: 4.023553637s Sep 7 09:19:08.485: INFO: Pod "pod-ea1b689c-a4f2-4ca3-b275-40a4bb96f295": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028402166s STEP: Saw pod success Sep 7 09:19:08.485: INFO: Pod "pod-ea1b689c-a4f2-4ca3-b275-40a4bb96f295" satisfied condition "Succeeded or Failed" Sep 7 09:19:08.489: INFO: Trying to get logs from node latest-worker2 pod pod-ea1b689c-a4f2-4ca3-b275-40a4bb96f295 container test-container: STEP: delete the pod Sep 7 09:19:08.504: INFO: Waiting for pod pod-ea1b689c-a4f2-4ca3-b275-40a4bb96f295 to disappear Sep 7 09:19:08.508: INFO: Pod pod-ea1b689c-a4f2-4ca3-b275-40a4bb96f295 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:19:08.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1662" for this suite. 
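The EmptyDir `(non-root,0666,tmpfs)` variant above runs a pod as a non-root user with a memory-backed `emptyDir` volume and checks file permissions inside the mount. The exact e2e fixture uses the `agnhost` mounttest image; the sketch below substitutes a plain busybox container that prints the mount details, so the image and command are illustrative assumptions rather than the test's actual fixture:

```yaml
# Sketch, not the exact e2e fixture: non-root pod with a tmpfs-backed emptyDir.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo   # hypothetical name
spec:
  securityContext:
    runAsUser: 1001           # the "non-root" part of the test variant
  containers:
  - name: test-container
    image: busybox:1.32       # stand-in for the agnhost mounttest image
    command: ["sh", "-c", "touch /test-volume/f && ls -l /test-volume && mount | grep test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory          # the "tmpfs" part of the test variant
  restartPolicy: Never
```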
• [SLOW TEST:6.114 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":290,"skipped":4521,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:19:08.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 7 09:19:09.118: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 7 09:19:11.148: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735067149, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735067149, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735067149, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735067149, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 7 09:19:13.153: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735067149, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735067149, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735067149, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735067149, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 7 
09:19:16.317: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:19:16.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3832" for this suite. STEP: Destroying namespace "webhook-3832-markers" for this suite. 
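The webhook test above registers a validating webhook, confirms it rejects a non-compliant ConfigMap, then updates and patches the configuration's `rules` to drop and re-add the `CREATE` operation, verifying the rejection toggles accordingly. A hedged sketch of such a configuration, with hypothetical webhook and path names (the `caBundle` that a real configuration needs is omitted; the service name and namespace echo the log):

```yaml
# Illustrative sketch: removing "CREATE" from operations (the test's update
# step) stops the webhook from being consulted on ConfigMap creation;
# patching it back in (the test's patch step) restores the rejection.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-configmap-demo          # hypothetical name
webhooks:
- name: deny-configmap.example.com   # hypothetical name
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]           # the field the test updates/patches
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-3832
      name: e2e-test-webhook
      path: /configmaps              # hypothetical path
  sideEffects: None
  admissionReviewVersions: ["v1"]
  failurePolicy: Fail
```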
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.019 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":303,"completed":291,"skipped":4534,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:19:16.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-218fa360-4fdb-4832-9169-034a872f1608 STEP: 
Creating a pod to test consume secrets Sep 7 09:19:16.619: INFO: Waiting up to 5m0s for pod "pod-secrets-51cefb64-f8d7-404a-bfdc-4e3b685e3f29" in namespace "secrets-8653" to be "Succeeded or Failed" Sep 7 09:19:16.658: INFO: Pod "pod-secrets-51cefb64-f8d7-404a-bfdc-4e3b685e3f29": Phase="Pending", Reason="", readiness=false. Elapsed: 39.212349ms Sep 7 09:19:18.663: INFO: Pod "pod-secrets-51cefb64-f8d7-404a-bfdc-4e3b685e3f29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043932394s Sep 7 09:19:20.682: INFO: Pod "pod-secrets-51cefb64-f8d7-404a-bfdc-4e3b685e3f29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063151893s STEP: Saw pod success Sep 7 09:19:20.682: INFO: Pod "pod-secrets-51cefb64-f8d7-404a-bfdc-4e3b685e3f29" satisfied condition "Succeeded or Failed" Sep 7 09:19:20.685: INFO: Trying to get logs from node latest-worker pod pod-secrets-51cefb64-f8d7-404a-bfdc-4e3b685e3f29 container secret-volume-test: STEP: delete the pod Sep 7 09:19:20.721: INFO: Waiting for pod pod-secrets-51cefb64-f8d7-404a-bfdc-4e3b685e3f29 to disappear Sep 7 09:19:20.737: INFO: Pod pod-secrets-51cefb64-f8d7-404a-bfdc-4e3b685e3f29 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:19:20.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8653" for this suite. 
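The Secrets test above exercises "volume with mappings": a secret key is projected into the volume under a custom file path via `items`, rather than under its key name. A minimal sketch under assumed names (the secret value, file path, and busybox image are illustrative; the e2e test uses its own fixture image):

```yaml
# Sketch of key-to-path mapping in a secret volume.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map       # hypothetical name
data:
  data-1: dmFsdWUtMQ==        # base64 for "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-mapped    # hypothetical name
spec:
  containers:
  - name: secret-volume-test
    image: busybox:1.32
    # The key "data-1" appears under the mapped path, not its key name.
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1
        path: new-path-data-1   # the "mapping" the test verifies
  restartPolicy: Never
```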
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":292,"skipped":4574,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:19:20.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:19:20.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4233" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":303,"completed":293,"skipped":4605,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:19:20.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args Sep 7 09:19:20.901: INFO: Waiting up to 5m0s for pod "var-expansion-e0c9ea05-8613-4251-9b17-bc238c2ac818" in namespace "var-expansion-318" to be "Succeeded or Failed" Sep 7 09:19:20.905: INFO: Pod "var-expansion-e0c9ea05-8613-4251-9b17-bc238c2ac818": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.805568ms Sep 7 09:19:22.910: INFO: Pod "var-expansion-e0c9ea05-8613-4251-9b17-bc238c2ac818": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008735491s Sep 7 09:19:25.006: INFO: Pod "var-expansion-e0c9ea05-8613-4251-9b17-bc238c2ac818": Phase="Running", Reason="", readiness=true. Elapsed: 4.104861538s Sep 7 09:19:27.011: INFO: Pod "var-expansion-e0c9ea05-8613-4251-9b17-bc238c2ac818": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.109724719s STEP: Saw pod success Sep 7 09:19:27.011: INFO: Pod "var-expansion-e0c9ea05-8613-4251-9b17-bc238c2ac818" satisfied condition "Succeeded or Failed" Sep 7 09:19:27.014: INFO: Trying to get logs from node latest-worker2 pod var-expansion-e0c9ea05-8613-4251-9b17-bc238c2ac818 container dapi-container: STEP: delete the pod Sep 7 09:19:27.047: INFO: Waiting for pod var-expansion-e0c9ea05-8613-4251-9b17-bc238c2ac818 to disappear Sep 7 09:19:27.067: INFO: Pod var-expansion-e0c9ea05-8613-4251-9b17-bc238c2ac818 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:19:27.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-318" for this suite. 
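The Variable Expansion test above checks that `$(VAR)` references in a container's `args` are substituted by the kubelet from the container's environment before the process starts, independently of any shell. A minimal sketch with an assumed variable name:

```yaml
# Sketch: $(MESSAGE) in args is expanded by the kubelet, not by sh.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo    # hypothetical name
spec:
  containers:
  - name: dapi-container
    image: busybox:1.32
    env:
    - name: MESSAGE
      value: "hello from the environment"
    command: ["sh", "-c"]
    args: ["echo $(MESSAGE)"]  # substituted before the container runs
  restartPolicy: Never
```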
• [SLOW TEST:6.240 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":303,"completed":294,"skipped":4616,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:19:27.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5440.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5440.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5440.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5440.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5440.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5440.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 7 09:19:33.266: INFO: DNS probes using dns-5440/dns-test-c020aaa5-37e0-46ba-91e4-d5fd676eee03 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:19:33.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5440" for this suite. 
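The DNS test above relies on the kubelet writing `/etc/hosts` entries for a pod that declares `hostname` and `subdomain`, which is what makes the `getent hosts dns-querier-1...` probes in the logged shell loops succeed. A hedged sketch of the pod side of that setup (a headless service named after the `subdomain` must also exist in the namespace for the FQDN to resolve; the image and command here are illustrative):

```yaml
# Sketch: hostname + subdomain give the pod stable /etc/hosts entries,
# which the test's getent-based probe loops check for.
apiVersion: v1
kind: Pod
metadata:
  name: dns-hosts-demo            # hypothetical name
spec:
  hostname: dns-querier-1
  subdomain: dns-test-service     # requires a matching headless service
  containers:
  - name: querier
    image: busybox:1.32
    command: ["sh", "-c", "getent hosts dns-querier-1 && sleep 3600"]
  restartPolicy: Never
```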
• [SLOW TEST:6.249 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":303,"completed":295,"skipped":4628,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:19:33.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-9505 STEP: creating service affinity-nodeport-transition in namespace services-9505 STEP: creating replication controller affinity-nodeport-transition in namespace services-9505 I0907 
09:19:33.906761 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-9505, replica count: 3 I0907 09:19:36.957164 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 09:19:39.957478 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0907 09:19:42.957696 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 7 09:19:42.968: INFO: Creating new exec pod Sep 7 09:19:47.996: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-9505 execpod-affinityhk2tw -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Sep 7 09:19:48.214: INFO: stderr: "I0907 09:19:48.135832 3413 log.go:181] (0xc000324dc0) (0xc000525720) Create stream\nI0907 09:19:48.135880 3413 log.go:181] (0xc000324dc0) (0xc000525720) Stream added, broadcasting: 1\nI0907 09:19:48.142096 3413 log.go:181] (0xc000324dc0) Reply frame received for 1\nI0907 09:19:48.142169 3413 log.go:181] (0xc000324dc0) (0xc000524140) Create stream\nI0907 09:19:48.142202 3413 log.go:181] (0xc000324dc0) (0xc000524140) Stream added, broadcasting: 3\nI0907 09:19:48.143284 3413 log.go:181] (0xc000324dc0) Reply frame received for 3\nI0907 09:19:48.143341 3413 log.go:181] (0xc000324dc0) (0xc000c0e0a0) Create stream\nI0907 09:19:48.143358 3413 log.go:181] (0xc000324dc0) (0xc000c0e0a0) Stream added, broadcasting: 5\nI0907 09:19:48.144297 3413 log.go:181] (0xc000324dc0) Reply frame received for 5\nI0907 09:19:48.207942 3413 log.go:181] (0xc000324dc0) Data frame received for 5\nI0907 09:19:48.207971 3413 log.go:181] (0xc000c0e0a0) (5) Data frame 
handling\nI0907 09:19:48.207980 3413 log.go:181] (0xc000c0e0a0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0907 09:19:48.208174 3413 log.go:181] (0xc000324dc0) Data frame received for 5\nI0907 09:19:48.208205 3413 log.go:181] (0xc000c0e0a0) (5) Data frame handling\nI0907 09:19:48.208230 3413 log.go:181] (0xc000c0e0a0) (5) Data frame sent\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0907 09:19:48.208692 3413 log.go:181] (0xc000324dc0) Data frame received for 3\nI0907 09:19:48.208790 3413 log.go:181] (0xc000524140) (3) Data frame handling\nI0907 09:19:48.208875 3413 log.go:181] (0xc000324dc0) Data frame received for 5\nI0907 09:19:48.208911 3413 log.go:181] (0xc000c0e0a0) (5) Data frame handling\nI0907 09:19:48.210467 3413 log.go:181] (0xc000324dc0) Data frame received for 1\nI0907 09:19:48.210539 3413 log.go:181] (0xc000525720) (1) Data frame handling\nI0907 09:19:48.210559 3413 log.go:181] (0xc000525720) (1) Data frame sent\nI0907 09:19:48.210660 3413 log.go:181] (0xc000324dc0) (0xc000525720) Stream removed, broadcasting: 1\nI0907 09:19:48.210767 3413 log.go:181] (0xc000324dc0) Go away received\nI0907 09:19:48.211145 3413 log.go:181] (0xc000324dc0) (0xc000525720) Stream removed, broadcasting: 1\nI0907 09:19:48.211173 3413 log.go:181] (0xc000324dc0) (0xc000524140) Stream removed, broadcasting: 3\nI0907 09:19:48.211188 3413 log.go:181] (0xc000324dc0) (0xc000c0e0a0) Stream removed, broadcasting: 5\n" Sep 7 09:19:48.215: INFO: stdout: "" Sep 7 09:19:48.215: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-9505 execpod-affinityhk2tw -- /bin/sh -x -c nc -zv -t -w 2 10.100.154.145 80' Sep 7 09:19:48.441: INFO: stderr: "I0907 09:19:48.352865 3431 log.go:181] (0xc000bd0000) (0xc0001eadc0) Create stream\nI0907 09:19:48.352931 3431 log.go:181] (0xc000bd0000) (0xc0001eadc0) Stream added, broadcasting: 1\nI0907 09:19:48.357926 3431 
log.go:181] (0xc000bd0000) Reply frame received for 1\nI0907 09:19:48.357997 3431 log.go:181] (0xc000bd0000) (0xc000394e60) Create stream\nI0907 09:19:48.358021 3431 log.go:181] (0xc000bd0000) (0xc000394e60) Stream added, broadcasting: 3\nI0907 09:19:48.358918 3431 log.go:181] (0xc000bd0000) Reply frame received for 3\nI0907 09:19:48.358958 3431 log.go:181] (0xc000bd0000) (0xc000395680) Create stream\nI0907 09:19:48.358969 3431 log.go:181] (0xc000bd0000) (0xc000395680) Stream added, broadcasting: 5\nI0907 09:19:48.359859 3431 log.go:181] (0xc000bd0000) Reply frame received for 5\nI0907 09:19:48.433686 3431 log.go:181] (0xc000bd0000) Data frame received for 3\nI0907 09:19:48.433746 3431 log.go:181] (0xc000394e60) (3) Data frame handling\nI0907 09:19:48.433781 3431 log.go:181] (0xc000bd0000) Data frame received for 5\nI0907 09:19:48.433800 3431 log.go:181] (0xc000395680) (5) Data frame handling\nI0907 09:19:48.433831 3431 log.go:181] (0xc000395680) (5) Data frame sent\nI0907 09:19:48.433852 3431 log.go:181] (0xc000bd0000) Data frame received for 5\nI0907 09:19:48.433870 3431 log.go:181] (0xc000395680) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.154.145 80\nConnection to 10.100.154.145 80 port [tcp/http] succeeded!\nI0907 09:19:48.435723 3431 log.go:181] (0xc000bd0000) Data frame received for 1\nI0907 09:19:48.435765 3431 log.go:181] (0xc0001eadc0) (1) Data frame handling\nI0907 09:19:48.435793 3431 log.go:181] (0xc0001eadc0) (1) Data frame sent\nI0907 09:19:48.435823 3431 log.go:181] (0xc000bd0000) (0xc0001eadc0) Stream removed, broadcasting: 1\nI0907 09:19:48.435863 3431 log.go:181] (0xc000bd0000) Go away received\nI0907 09:19:48.436519 3431 log.go:181] (0xc000bd0000) (0xc0001eadc0) Stream removed, broadcasting: 1\nI0907 09:19:48.436545 3431 log.go:181] (0xc000bd0000) (0xc000394e60) Stream removed, broadcasting: 3\nI0907 09:19:48.436575 3431 log.go:181] (0xc000bd0000) (0xc000395680) Stream removed, broadcasting: 5\n" Sep 7 09:19:48.441: INFO: stdout: "" Sep 7 
09:19:48.441: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-9505 execpod-affinityhk2tw -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 31440' Sep 7 09:19:48.655: INFO: stderr: "I0907 09:19:48.589353 3449 log.go:181] (0xc0007b33f0) (0xc0007328c0) Create stream\nI0907 09:19:48.589407 3449 log.go:181] (0xc0007b33f0) (0xc0007328c0) Stream added, broadcasting: 1\nI0907 09:19:48.595101 3449 log.go:181] (0xc0007b33f0) Reply frame received for 1\nI0907 09:19:48.595138 3449 log.go:181] (0xc0007b33f0) (0xc000732000) Create stream\nI0907 09:19:48.595148 3449 log.go:181] (0xc0007b33f0) (0xc000732000) Stream added, broadcasting: 3\nI0907 09:19:48.596090 3449 log.go:181] (0xc0007b33f0) Reply frame received for 3\nI0907 09:19:48.596131 3449 log.go:181] (0xc0007b33f0) (0xc000c2c0a0) Create stream\nI0907 09:19:48.596145 3449 log.go:181] (0xc0007b33f0) (0xc000c2c0a0) Stream added, broadcasting: 5\nI0907 09:19:48.596973 3449 log.go:181] (0xc0007b33f0) Reply frame received for 5\nI0907 09:19:48.649472 3449 log.go:181] (0xc0007b33f0) Data frame received for 5\nI0907 09:19:48.649518 3449 log.go:181] (0xc000c2c0a0) (5) Data frame handling\nI0907 09:19:48.649546 3449 log.go:181] (0xc000c2c0a0) (5) Data frame sent\nI0907 09:19:48.649560 3449 log.go:181] (0xc0007b33f0) Data frame received for 5\nI0907 09:19:48.649571 3449 log.go:181] (0xc000c2c0a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.15 31440\nConnection to 172.18.0.15 31440 port [tcp/31440] succeeded!\nI0907 09:19:48.649615 3449 log.go:181] (0xc000c2c0a0) (5) Data frame sent\nI0907 09:19:48.649766 3449 log.go:181] (0xc0007b33f0) Data frame received for 3\nI0907 09:19:48.649806 3449 log.go:181] (0xc000732000) (3) Data frame handling\nI0907 09:19:48.649960 3449 log.go:181] (0xc0007b33f0) Data frame received for 5\nI0907 09:19:48.649986 3449 log.go:181] (0xc000c2c0a0) (5) Data frame handling\nI0907 09:19:48.651284 3449 log.go:181] (0xc0007b33f0) 
Data frame received for 1\nI0907 09:19:48.651307 3449 log.go:181] (0xc0007328c0) (1) Data frame handling\nI0907 09:19:48.651321 3449 log.go:181] (0xc0007328c0) (1) Data frame sent\nI0907 09:19:48.651429 3449 log.go:181] (0xc0007b33f0) (0xc0007328c0) Stream removed, broadcasting: 1\nI0907 09:19:48.651645 3449 log.go:181] (0xc0007b33f0) Go away received\nI0907 09:19:48.651769 3449 log.go:181] (0xc0007b33f0) (0xc0007328c0) Stream removed, broadcasting: 1\nI0907 09:19:48.651785 3449 log.go:181] (0xc0007b33f0) (0xc000732000) Stream removed, broadcasting: 3\nI0907 09:19:48.651794 3449 log.go:181] (0xc0007b33f0) (0xc000c2c0a0) Stream removed, broadcasting: 5\n" Sep 7 09:19:48.655: INFO: stdout: "" Sep 7 09:19:48.655: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-9505 execpod-affinityhk2tw -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31440' Sep 7 09:19:48.873: INFO: stderr: "I0907 09:19:48.781544 3468 log.go:181] (0xc000568dc0) (0xc0000ef040) Create stream\nI0907 09:19:48.781600 3468 log.go:181] (0xc000568dc0) (0xc0000ef040) Stream added, broadcasting: 1\nI0907 09:19:48.787172 3468 log.go:181] (0xc000568dc0) Reply frame received for 1\nI0907 09:19:48.787215 3468 log.go:181] (0xc000568dc0) (0xc0000ef860) Create stream\nI0907 09:19:48.787229 3468 log.go:181] (0xc000568dc0) (0xc0000ef860) Stream added, broadcasting: 3\nI0907 09:19:48.788097 3468 log.go:181] (0xc000568dc0) Reply frame received for 3\nI0907 09:19:48.788149 3468 log.go:181] (0xc000568dc0) (0xc000bc3ea0) Create stream\nI0907 09:19:48.788166 3468 log.go:181] (0xc000568dc0) (0xc000bc3ea0) Stream added, broadcasting: 5\nI0907 09:19:48.789050 3468 log.go:181] (0xc000568dc0) Reply frame received for 5\nI0907 09:19:48.866872 3468 log.go:181] (0xc000568dc0) Data frame received for 5\nI0907 09:19:48.866919 3468 log.go:181] (0xc000bc3ea0) (5) Data frame handling\nI0907 09:19:48.866946 3468 log.go:181] (0xc000bc3ea0) (5) Data frame 
sent\nI0907 09:19:48.866966 3468 log.go:181] (0xc000568dc0) Data frame received for 5\nI0907 09:19:48.866984 3468 log.go:181] (0xc000bc3ea0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 31440\nConnection to 172.18.0.14 31440 port [tcp/31440] succeeded!\nI0907 09:19:48.867054 3468 log.go:181] (0xc000bc3ea0) (5) Data frame sent\nI0907 09:19:48.867075 3468 log.go:181] (0xc000568dc0) Data frame received for 5\nI0907 09:19:48.867086 3468 log.go:181] (0xc000bc3ea0) (5) Data frame handling\nI0907 09:19:48.867197 3468 log.go:181] (0xc000568dc0) Data frame received for 3\nI0907 09:19:48.867220 3468 log.go:181] (0xc0000ef860) (3) Data frame handling\nI0907 09:19:48.868987 3468 log.go:181] (0xc000568dc0) Data frame received for 1\nI0907 09:19:48.869019 3468 log.go:181] (0xc0000ef040) (1) Data frame handling\nI0907 09:19:48.869037 3468 log.go:181] (0xc0000ef040) (1) Data frame sent\nI0907 09:19:48.869057 3468 log.go:181] (0xc000568dc0) (0xc0000ef040) Stream removed, broadcasting: 1\nI0907 09:19:48.869085 3468 log.go:181] (0xc000568dc0) Go away received\nI0907 09:19:48.869472 3468 log.go:181] (0xc000568dc0) (0xc0000ef040) Stream removed, broadcasting: 1\nI0907 09:19:48.869496 3468 log.go:181] (0xc000568dc0) (0xc0000ef860) Stream removed, broadcasting: 3\nI0907 09:19:48.869506 3468 log.go:181] (0xc000568dc0) (0xc000bc3ea0) Stream removed, broadcasting: 5\n" Sep 7 09:19:48.873: INFO: stdout: "" Sep 7 09:19:48.882: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-9505 execpod-affinityhk2tw -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.15:31440/ ; done' Sep 7 09:19:49.222: INFO: stderr: "I0907 09:19:49.024352 3486 log.go:181] (0xc0001bd600) (0xc000c585a0) Create stream\nI0907 09:19:49.024443 3486 log.go:181] (0xc0001bd600) (0xc000c585a0) Stream added, broadcasting: 1\nI0907 09:19:49.030107 3486 log.go:181] (0xc0001bd600) Reply frame received 
for 1\nI0907 09:19:49.030165 3486 log.go:181] (0xc0001bd600) (0xc0004e8000) Create stream\nI0907 09:19:49.030183 3486 log.go:181] (0xc0001bd600) (0xc0004e8000) Stream added, broadcasting: 3\nI0907 09:19:49.031281 3486 log.go:181] (0xc0001bd600) Reply frame received for 3\nI0907 09:19:49.031336 3486 log.go:181] (0xc0001bd600) (0xc000c58000) Create stream\nI0907 09:19:49.031352 3486 log.go:181] (0xc0001bd600) (0xc000c58000) Stream added, broadcasting: 5\nI0907 09:19:49.032387 3486 log.go:181] (0xc0001bd600) Reply frame received for 5\nI0907 09:19:49.113858 3486 log.go:181] (0xc0001bd600) Data frame received for 5\nI0907 09:19:49.113899 3486 log.go:181] (0xc000c58000) (5) Data frame handling\nI0907 09:19:49.113914 3486 log.go:181] (0xc000c58000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.113927 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.113967 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.113992 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.118796 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.118835 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.118860 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.119242 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.119263 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.119275 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.119298 3486 log.go:181] (0xc0001bd600) Data frame received for 5\nI0907 09:19:49.119329 3486 log.go:181] (0xc000c58000) (5) Data frame handling\nI0907 09:19:49.119365 3486 log.go:181] (0xc000c58000) (5) Data frame sent\nI0907 09:19:49.119392 3486 log.go:181] (0xc0001bd600) Data frame received for 5\nI0907 09:19:49.119406 3486 log.go:181] (0xc000c58000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.18.0.15:31440/\nI0907 09:19:49.119430 3486 log.go:181] (0xc000c58000) (5) Data frame sent\nI0907 09:19:49.126619 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.126654 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.126688 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.127207 3486 log.go:181] (0xc0001bd600) Data frame received for 5\nI0907 09:19:49.127245 3486 log.go:181] (0xc000c58000) (5) Data frame handling\nI0907 09:19:49.127261 3486 log.go:181] (0xc000c58000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.127285 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.127299 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.127312 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.134649 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.134693 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.134723 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.134963 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.134992 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.135000 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.135009 3486 log.go:181] (0xc0001bd600) Data frame received for 5\nI0907 09:19:49.135016 3486 log.go:181] (0xc000c58000) (5) Data frame handling\nI0907 09:19:49.135022 3486 log.go:181] (0xc000c58000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.140918 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.140950 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.140974 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.141756 3486 log.go:181] (0xc0001bd600) Data frame received for 5\nI0907 09:19:49.141773 3486 
log.go:181] (0xc000c58000) (5) Data frame handling\nI0907 09:19:49.141785 3486 log.go:181] (0xc000c58000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.141798 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.141805 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.141811 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.149960 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.149989 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.150020 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.150451 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.150478 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.150490 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.150498 3486 log.go:181] (0xc0001bd600) Data frame received for 5\nI0907 09:19:49.150505 3486 log.go:181] (0xc000c58000) (5) Data frame handling\nI0907 09:19:49.150523 3486 log.go:181] (0xc000c58000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.154255 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.154277 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.154299 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.155066 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.155082 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.155093 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.155149 3486 log.go:181] (0xc0001bd600) Data frame received for 5\nI0907 09:19:49.155179 3486 log.go:181] (0xc000c58000) (5) Data frame handling\nI0907 09:19:49.155200 3486 log.go:181] (0xc000c58000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.159549 
3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.159571 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.159590 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.160326 3486 log.go:181] (0xc0001bd600) Data frame received for 5\nI0907 09:19:49.160352 3486 log.go:181] (0xc000c58000) (5) Data frame handling\nI0907 09:19:49.160390 3486 log.go:181] (0xc000c58000) (5) Data frame sent\n+ echo\n+ curl -q -sI0907 09:19:49.160504 3486 log.go:181] (0xc0001bd600) Data frame received for 5\nI0907 09:19:49.160520 3486 log.go:181] (0xc000c58000) (5) Data frame handling\nI0907 09:19:49.160539 3486 log.go:181] (0xc000c58000) (5) Data frame sent\n --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.160668 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.160690 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.160706 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.164429 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.164443 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.164454 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.164806 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.164819 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.164832 3486 log.go:181] (0xc0001bd600) Data frame received for 5\nI0907 09:19:49.164857 3486 log.go:181] (0xc000c58000) (5) Data frame handling\nI0907 09:19:49.164867 3486 log.go:181] (0xc000c58000) (5) Data frame sent\nI0907 09:19:49.164876 3486 log.go:181] (0xc0001bd600) Data frame received for 5\nI0907 09:19:49.164884 3486 log.go:181] (0xc000c58000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.164904 3486 log.go:181] (0xc000c58000) (5) Data frame sent\nI0907 09:19:49.164913 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 
09:19:49.169652 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.169677 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.169692 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.170456 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.170478 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.170490 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.170510 3486 log.go:181] (0xc0001bd600) Data frame received for 5\nI0907 09:19:49.170527 3486 log.go:181] (0xc000c58000) (5) Data frame handling\nI0907 09:19:49.170539 3486 log.go:181] (0xc000c58000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.177996 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.178015 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.178026 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.178652 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.178693 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.178713 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.178760 3486 log.go:181] (0xc0001bd600) Data frame received for 5\nI0907 09:19:49.178781 3486 log.go:181] (0xc000c58000) (5) Data frame handling\nI0907 09:19:49.178806 3486 log.go:181] (0xc000c58000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.185312 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.185333 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.185351 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.185973 3486 log.go:181] (0xc0001bd600) Data frame received for 5\nI0907 09:19:49.186003 3486 log.go:181] (0xc000c58000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.18.0.15:31440/\nI0907 09:19:49.186029 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.186055 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.186070 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.186085 3486 log.go:181] (0xc000c58000) (5) Data frame sent\nI0907 09:19:49.191501 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.191518 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.191532 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.192521 3486 log.go:181] (0xc0001bd600) Data frame received for 5\nI0907 09:19:49.192540 3486 log.go:181] (0xc000c58000) (5) Data frame handling\nI0907 09:19:49.192553 3486 log.go:181] (0xc000c58000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.192567 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.192576 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.192630 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.196767 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.196797 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.196831 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.197241 3486 log.go:181] (0xc0001bd600) Data frame received for 5\nI0907 09:19:49.197264 3486 log.go:181] (0xc000c58000) (5) Data frame handling\nI0907 09:19:49.197306 3486 log.go:181] (0xc000c58000) (5) Data frame sent\nI0907 09:19:49.197324 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.197337 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.197349 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.203120 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.203143 3486 
log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.203154 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.203702 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.203823 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.203849 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.203885 3486 log.go:181] (0xc0001bd600) Data frame received for 5\nI0907 09:19:49.203904 3486 log.go:181] (0xc000c58000) (5) Data frame handling\nI0907 09:19:49.203941 3486 log.go:181] (0xc000c58000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.208791 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.208809 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.208824 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.209313 3486 log.go:181] (0xc0001bd600) Data frame received for 5\nI0907 09:19:49.209350 3486 log.go:181] (0xc000c58000) (5) Data frame handling\nI0907 09:19:49.209374 3486 log.go:181] (0xc000c58000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.209409 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.209439 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.209465 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.213374 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.213398 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 09:19:49.213418 3486 log.go:181] (0xc0004e8000) (3) Data frame sent\nI0907 09:19:49.214267 3486 log.go:181] (0xc0001bd600) Data frame received for 5\nI0907 09:19:49.214294 3486 log.go:181] (0xc000c58000) (5) Data frame handling\nI0907 09:19:49.214316 3486 log.go:181] (0xc0001bd600) Data frame received for 3\nI0907 09:19:49.214341 3486 log.go:181] (0xc0004e8000) (3) Data frame handling\nI0907 
09:19:49.216347 3486 log.go:181] (0xc0001bd600) Data frame received for 1\nI0907 09:19:49.216362 3486 log.go:181] (0xc000c585a0) (1) Data frame handling\nI0907 09:19:49.216369 3486 log.go:181] (0xc000c585a0) (1) Data frame sent\nI0907 09:19:49.216380 3486 log.go:181] (0xc0001bd600) (0xc000c585a0) Stream removed, broadcasting: 1\nI0907 09:19:49.216388 3486 log.go:181] (0xc0001bd600) Go away received\nI0907 09:19:49.216922 3486 log.go:181] (0xc0001bd600) (0xc000c585a0) Stream removed, broadcasting: 1\nI0907 09:19:49.216946 3486 log.go:181] (0xc0001bd600) (0xc0004e8000) Stream removed, broadcasting: 3\nI0907 09:19:49.216959 3486 log.go:181] (0xc0001bd600) (0xc000c58000) Stream removed, broadcasting: 5\n" Sep 7 09:19:49.222: INFO: stdout: "\naffinity-nodeport-transition-gv529\naffinity-nodeport-transition-8m6cr\naffinity-nodeport-transition-pwsv9\naffinity-nodeport-transition-8m6cr\naffinity-nodeport-transition-8m6cr\naffinity-nodeport-transition-8m6cr\naffinity-nodeport-transition-pwsv9\naffinity-nodeport-transition-pwsv9\naffinity-nodeport-transition-gv529\naffinity-nodeport-transition-8m6cr\naffinity-nodeport-transition-gv529\naffinity-nodeport-transition-8m6cr\naffinity-nodeport-transition-pwsv9\naffinity-nodeport-transition-pwsv9\naffinity-nodeport-transition-pwsv9\naffinity-nodeport-transition-gv529" Sep 7 09:19:49.222: INFO: Received response from host: affinity-nodeport-transition-gv529 Sep 7 09:19:49.222: INFO: Received response from host: affinity-nodeport-transition-8m6cr Sep 7 09:19:49.222: INFO: Received response from host: affinity-nodeport-transition-pwsv9 Sep 7 09:19:49.222: INFO: Received response from host: affinity-nodeport-transition-8m6cr Sep 7 09:19:49.222: INFO: Received response from host: affinity-nodeport-transition-8m6cr Sep 7 09:19:49.222: INFO: Received response from host: affinity-nodeport-transition-8m6cr Sep 7 09:19:49.222: INFO: Received response from host: affinity-nodeport-transition-pwsv9 Sep 7 09:19:49.222: INFO: Received response 
from host: affinity-nodeport-transition-pwsv9 Sep 7 09:19:49.222: INFO: Received response from host: affinity-nodeport-transition-gv529 Sep 7 09:19:49.222: INFO: Received response from host: affinity-nodeport-transition-8m6cr Sep 7 09:19:49.222: INFO: Received response from host: affinity-nodeport-transition-gv529 Sep 7 09:19:49.222: INFO: Received response from host: affinity-nodeport-transition-8m6cr Sep 7 09:19:49.222: INFO: Received response from host: affinity-nodeport-transition-pwsv9 Sep 7 09:19:49.222: INFO: Received response from host: affinity-nodeport-transition-pwsv9 Sep 7 09:19:49.222: INFO: Received response from host: affinity-nodeport-transition-pwsv9 Sep 7 09:19:49.222: INFO: Received response from host: affinity-nodeport-transition-gv529 Sep 7 09:19:49.231: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43335 --kubeconfig=/root/.kube/config exec --namespace=services-9505 execpod-affinityhk2tw -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.15:31440/ ; done' Sep 7 09:19:49.556: INFO: stderr: "I0907 09:19:49.367735 3505 log.go:181] (0xc0001b2370) (0xc000b20dc0) Create stream\nI0907 09:19:49.367797 3505 log.go:181] (0xc0001b2370) (0xc000b20dc0) Stream added, broadcasting: 1\nI0907 09:19:49.369456 3505 log.go:181] (0xc0001b2370) Reply frame received for 1\nI0907 09:19:49.369486 3505 log.go:181] (0xc0001b2370) (0xc00018a000) Create stream\nI0907 09:19:49.369502 3505 log.go:181] (0xc0001b2370) (0xc00018a000) Stream added, broadcasting: 3\nI0907 09:19:49.370430 3505 log.go:181] (0xc0001b2370) Reply frame received for 3\nI0907 09:19:49.370479 3505 log.go:181] (0xc0001b2370) (0xc00018a0a0) Create stream\nI0907 09:19:49.370507 3505 log.go:181] (0xc0001b2370) (0xc00018a0a0) Stream added, broadcasting: 5\nI0907 09:19:49.371469 3505 log.go:181] (0xc0001b2370) Reply frame received for 5\nI0907 09:19:49.447309 3505 log.go:181] (0xc0001b2370) Data frame received for 5\nI0907 09:19:49.447332 3505 
log.go:181] (0xc00018a0a0) (5) Data frame handling\nI0907 09:19:49.447339 3505 log.go:181] (0xc00018a0a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.447387 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.447406 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.447421 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.451605 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.451623 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.451640 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.452562 3505 log.go:181] (0xc0001b2370) Data frame received for 5\nI0907 09:19:49.452580 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.452595 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.452602 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.452611 3505 log.go:181] (0xc00018a0a0) (5) Data frame handling\nI0907 09:19:49.452616 3505 log.go:181] (0xc00018a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.458832 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.458859 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.458886 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.459349 3505 log.go:181] (0xc0001b2370) Data frame received for 5\nI0907 09:19:49.459372 3505 log.go:181] (0xc00018a0a0) (5) Data frame handling\nI0907 09:19:49.459384 3505 log.go:181] (0xc00018a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.459396 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.459403 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.459410 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 
09:19:49.463370 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.463393 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.463412 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.463973 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.463991 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.464000 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.464084 3505 log.go:181] (0xc0001b2370) Data frame received for 5\nI0907 09:19:49.464092 3505 log.go:181] (0xc00018a0a0) (5) Data frame handling\nI0907 09:19:49.464100 3505 log.go:181] (0xc00018a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.470602 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.470629 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.470649 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.471253 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.471276 3505 log.go:181] (0xc0001b2370) Data frame received for 5\nI0907 09:19:49.471298 3505 log.go:181] (0xc00018a0a0) (5) Data frame handling\nI0907 09:19:49.471306 3505 log.go:181] (0xc00018a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.471318 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.471325 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.476476 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.476498 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.476517 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.477196 3505 log.go:181] (0xc0001b2370) Data frame received for 5\nI0907 09:19:49.477213 3505 log.go:181] (0xc00018a0a0) (5) Data frame handling\nI0907 09:19:49.477223 3505 log.go:181] (0xc00018a0a0) (5) Data 
frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.477242 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.477262 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.477275 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.482507 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.482540 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.482567 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.482902 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.482919 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.482940 3505 log.go:181] (0xc0001b2370) Data frame received for 5\nI0907 09:19:49.482975 3505 log.go:181] (0xc00018a0a0) (5) Data frame handling\nI0907 09:19:49.482990 3505 log.go:181] (0xc00018a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.483016 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.489510 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.489544 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.489569 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.490285 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.490304 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.490321 3505 log.go:181] (0xc0001b2370) Data frame received for 5\nI0907 09:19:49.490352 3505 log.go:181] (0xc00018a0a0) (5) Data frame handling\nI0907 09:19:49.490365 3505 log.go:181] (0xc00018a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.490387 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.494522 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.494550 3505 log.go:181] (0xc00018a000) (3) 
Data frame handling\nI0907 09:19:49.494572 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.495372 3505 log.go:181] (0xc0001b2370) Data frame received for 5\nI0907 09:19:49.495402 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.495431 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.495444 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.495463 3505 log.go:181] (0xc00018a0a0) (5) Data frame handling\nI0907 09:19:49.495474 3505 log.go:181] (0xc00018a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.500555 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.500587 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.500600 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.501296 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.501332 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.501348 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.501363 3505 log.go:181] (0xc0001b2370) Data frame received for 5\nI0907 09:19:49.501381 3505 log.go:181] (0xc00018a0a0) (5) Data frame handling\nI0907 09:19:49.501411 3505 log.go:181] (0xc00018a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.506529 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.506567 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.506596 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.507537 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.507550 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.507559 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.507584 3505 log.go:181] (0xc0001b2370) Data frame received for 5\nI0907 09:19:49.507615 3505 log.go:181] 
(0xc00018a0a0) (5) Data frame handling\nI0907 09:19:49.507649 3505 log.go:181] (0xc00018a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.514145 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.514174 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.514189 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.514998 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.515050 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.515078 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.515131 3505 log.go:181] (0xc0001b2370) Data frame received for 5\nI0907 09:19:49.515172 3505 log.go:181] (0xc00018a0a0) (5) Data frame handling\nI0907 09:19:49.515203 3505 log.go:181] (0xc00018a0a0) (5) Data frame sent\n+ echo\n+ curl -qI0907 09:19:49.515219 3505 log.go:181] (0xc0001b2370) Data frame received for 5\nI0907 09:19:49.515272 3505 log.go:181] (0xc00018a0a0) (5) Data frame handling\nI0907 09:19:49.515298 3505 log.go:181] (0xc00018a0a0) (5) Data frame sent\n -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.522432 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.522458 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.522471 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.524151 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.524192 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.524205 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.524226 3505 log.go:181] (0xc0001b2370) Data frame received for 5\nI0907 09:19:49.524247 3505 log.go:181] (0xc00018a0a0) (5) Data frame handling\nI0907 09:19:49.524279 3505 log.go:181] (0xc00018a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.529651 3505 
log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.529676 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.529716 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.530281 3505 log.go:181] (0xc0001b2370) Data frame received for 5\nI0907 09:19:49.530304 3505 log.go:181] (0xc00018a0a0) (5) Data frame handling\nI0907 09:19:49.530317 3505 log.go:181] (0xc00018a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.530330 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.530338 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.530347 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.537018 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.537042 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.537061 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.537944 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.537959 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.537967 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.537978 3505 log.go:181] (0xc0001b2370) Data frame received for 5\nI0907 09:19:49.537988 3505 log.go:181] (0xc00018a0a0) (5) Data frame handling\nI0907 09:19:49.537995 3505 log.go:181] (0xc00018a0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.542459 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.542471 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.542482 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.542813 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.542839 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.542848 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 
09:19:49.542858 3505 log.go:181] (0xc0001b2370) Data frame received for 5\nI0907 09:19:49.542863 3505 log.go:181] (0xc00018a0a0) (5) Data frame handling\nI0907 09:19:49.542869 3505 log.go:181] (0xc00018a0a0) (5) Data frame sent\nI0907 09:19:49.542875 3505 log.go:181] (0xc0001b2370) Data frame received for 5\nI0907 09:19:49.542879 3505 log.go:181] (0xc00018a0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31440/\nI0907 09:19:49.542893 3505 log.go:181] (0xc00018a0a0) (5) Data frame sent\nI0907 09:19:49.548668 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.548697 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.548727 3505 log.go:181] (0xc00018a000) (3) Data frame sent\nI0907 09:19:49.549432 3505 log.go:181] (0xc0001b2370) Data frame received for 5\nI0907 09:19:49.549470 3505 log.go:181] (0xc00018a0a0) (5) Data frame handling\nI0907 09:19:49.549587 3505 log.go:181] (0xc0001b2370) Data frame received for 3\nI0907 09:19:49.549611 3505 log.go:181] (0xc00018a000) (3) Data frame handling\nI0907 09:19:49.551619 3505 log.go:181] (0xc0001b2370) Data frame received for 1\nI0907 09:19:49.551684 3505 log.go:181] (0xc000b20dc0) (1) Data frame handling\nI0907 09:19:49.551721 3505 log.go:181] (0xc000b20dc0) (1) Data frame sent\nI0907 09:19:49.551741 3505 log.go:181] (0xc0001b2370) (0xc000b20dc0) Stream removed, broadcasting: 1\nI0907 09:19:49.551772 3505 log.go:181] (0xc0001b2370) Go away received\nI0907 09:19:49.552351 3505 log.go:181] (0xc0001b2370) (0xc000b20dc0) Stream removed, broadcasting: 1\nI0907 09:19:49.552367 3505 log.go:181] (0xc0001b2370) (0xc00018a000) Stream removed, broadcasting: 3\nI0907 09:19:49.552374 3505 log.go:181] (0xc0001b2370) (0xc00018a0a0) Stream removed, broadcasting: 5\n" Sep 7 09:19:49.557: INFO: stdout: 
"\naffinity-nodeport-transition-pwsv9\naffinity-nodeport-transition-pwsv9\naffinity-nodeport-transition-pwsv9\naffinity-nodeport-transition-pwsv9\naffinity-nodeport-transition-pwsv9\naffinity-nodeport-transition-pwsv9\naffinity-nodeport-transition-pwsv9\naffinity-nodeport-transition-pwsv9\naffinity-nodeport-transition-pwsv9\naffinity-nodeport-transition-pwsv9\naffinity-nodeport-transition-pwsv9\naffinity-nodeport-transition-pwsv9\naffinity-nodeport-transition-pwsv9\naffinity-nodeport-transition-pwsv9\naffinity-nodeport-transition-pwsv9\naffinity-nodeport-transition-pwsv9" Sep 7 09:19:49.557: INFO: Received response from host: affinity-nodeport-transition-pwsv9 Sep 7 09:19:49.557: INFO: Received response from host: affinity-nodeport-transition-pwsv9 Sep 7 09:19:49.557: INFO: Received response from host: affinity-nodeport-transition-pwsv9 Sep 7 09:19:49.557: INFO: Received response from host: affinity-nodeport-transition-pwsv9 Sep 7 09:19:49.557: INFO: Received response from host: affinity-nodeport-transition-pwsv9 Sep 7 09:19:49.557: INFO: Received response from host: affinity-nodeport-transition-pwsv9 Sep 7 09:19:49.557: INFO: Received response from host: affinity-nodeport-transition-pwsv9 Sep 7 09:19:49.557: INFO: Received response from host: affinity-nodeport-transition-pwsv9 Sep 7 09:19:49.557: INFO: Received response from host: affinity-nodeport-transition-pwsv9 Sep 7 09:19:49.557: INFO: Received response from host: affinity-nodeport-transition-pwsv9 Sep 7 09:19:49.557: INFO: Received response from host: affinity-nodeport-transition-pwsv9 Sep 7 09:19:49.557: INFO: Received response from host: affinity-nodeport-transition-pwsv9 Sep 7 09:19:49.557: INFO: Received response from host: affinity-nodeport-transition-pwsv9 Sep 7 09:19:49.557: INFO: Received response from host: affinity-nodeport-transition-pwsv9 Sep 7 09:19:49.557: INFO: Received response from host: affinity-nodeport-transition-pwsv9 Sep 7 09:19:49.557: INFO: Received response from host: 
affinity-nodeport-transition-pwsv9 Sep 7 09:19:49.557: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-9505, will wait for the garbage collector to delete the pods Sep 7 09:19:49.709: INFO: Deleting ReplicationController affinity-nodeport-transition took: 56.620035ms Sep 7 09:19:50.209: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 500.252602ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:20:01.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9505" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:28.685 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":296,"skipped":4636,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] server version should find the server version [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] server version 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:20:02.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Request ServerVersion STEP: Confirm major version Sep 7 09:20:02.105: INFO: Major version: 1 STEP: Confirm minor version Sep 7 09:20:02.105: INFO: cleanMinorVersion: 19 Sep 7 09:20:02.105: INFO: Minor version: 19 [AfterEach] [sig-api-machinery] server version /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:20:02.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-7481" for this suite. 
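The server-version check above confirms the major version ("1") and a cleaned minor version ("19") separately. A rough Python sketch of that check (the actual test is Go in `test/e2e`; the function names here are illustrative, not the suite's):

```python
import re

def clean_minor_version(minor: str) -> str:
    """Strip any non-digit suffix from the reported minor version
    (some builds report e.g. "19+"), mirroring the 'cleanMinorVersion'
    value the test logs above."""
    match = re.match(r"\d+", minor)
    return match.group(0) if match else ""

def confirm_version(major: str, minor: str, want_major: str, want_minor: str) -> bool:
    # The test confirms major and (cleaned) minor independently.
    return major == want_major and clean_minor_version(minor) == want_minor

# Mirrors the run above: kube-apiserver v1.19.0 reports major "1", minor "19".
print(confirm_version("1", "19", "1", "19"))  # True
print(clean_minor_version("19+"))             # 19
```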
•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":303,"completed":297,"skipped":4695,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:20:02.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 7 09:20:02.173: INFO: Waiting up to 5m0s for pod "downwardapi-volume-57aedab5-0600-48a5-826b-f9b7e4ac7bc7" in namespace "projected-8360" to be "Succeeded or Failed" Sep 7 09:20:02.192: INFO: Pod "downwardapi-volume-57aedab5-0600-48a5-826b-f9b7e4ac7bc7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.585338ms Sep 7 09:20:04.196: INFO: Pod "downwardapi-volume-57aedab5-0600-48a5-826b-f9b7e4ac7bc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022295174s Sep 7 09:20:06.397: INFO: Pod "downwardapi-volume-57aedab5-0600-48a5-826b-f9b7e4ac7bc7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.223251523s STEP: Saw pod success Sep 7 09:20:06.397: INFO: Pod "downwardapi-volume-57aedab5-0600-48a5-826b-f9b7e4ac7bc7" satisfied condition "Succeeded or Failed" Sep 7 09:20:06.401: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-57aedab5-0600-48a5-826b-f9b7e4ac7bc7 container client-container: STEP: delete the pod Sep 7 09:20:06.535: INFO: Waiting for pod downwardapi-volume-57aedab5-0600-48a5-826b-f9b7e4ac7bc7 to disappear Sep 7 09:20:06.541: INFO: Pod downwardapi-volume-57aedab5-0600-48a5-826b-f9b7e4ac7bc7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:20:06.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8360" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":298,"skipped":4695,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:20:06.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:20:06.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-9976" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":303,"completed":299,"skipped":4812,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:20:06.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-86917c18-763d-49e2-8b1f-9ce0a52fc21d STEP: Creating a pod to test consume configMaps Sep 7 09:20:06.860: INFO: Waiting up to 5m0s for pod "pod-configmaps-639e58f4-5c70-4cb6-afc0-869711b821ba" in namespace "configmap-1774" to be "Succeeded or Failed" Sep 7 09:20:06.889: INFO: Pod 
"pod-configmaps-639e58f4-5c70-4cb6-afc0-869711b821ba": Phase="Pending", Reason="", readiness=false. Elapsed: 29.022782ms Sep 7 09:20:08.928: INFO: Pod "pod-configmaps-639e58f4-5c70-4cb6-afc0-869711b821ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06841541s Sep 7 09:20:10.994: INFO: Pod "pod-configmaps-639e58f4-5c70-4cb6-afc0-869711b821ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.133989639s STEP: Saw pod success Sep 7 09:20:10.994: INFO: Pod "pod-configmaps-639e58f4-5c70-4cb6-afc0-869711b821ba" satisfied condition "Succeeded or Failed" Sep 7 09:20:10.997: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-639e58f4-5c70-4cb6-afc0-869711b821ba container configmap-volume-test: STEP: delete the pod Sep 7 09:20:11.145: INFO: Waiting for pod pod-configmaps-639e58f4-5c70-4cb6-afc0-869711b821ba to disappear Sep 7 09:20:11.157: INFO: Pod pod-configmaps-639e58f4-5c70-4cb6-afc0-869711b821ba no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:20:11.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1774" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":300,"skipped":4872,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:20:11.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API 
/workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:20:11.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5018" for this suite. •{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":301,"skipped":4905,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:20:11.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Sep 7 09:20:11.351: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9788 /api/v1/namespaces/watch-9788/configmaps/e2e-watch-test-watch-closed 894ca51d-c766-4f89-b790-abd192e43109 302738 0 2020-09-07 09:20:11 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-09-07 
09:20:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Sep 7 09:20:11.352: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9788 /api/v1/namespaces/watch-9788/configmaps/e2e-watch-test-watch-closed 894ca51d-c766-4f89-b790-abd192e43109 302739 0 2020-09-07 09:20:11 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-09-07 09:20:11 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Sep 7 09:20:11.403: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9788 /api/v1/namespaces/watch-9788/configmaps/e2e-watch-test-watch-closed 894ca51d-c766-4f89-b790-abd192e43109 302740 0 2020-09-07 09:20:11 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-09-07 09:20:11 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 7 09:20:11.403: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9788 /api/v1/namespaces/watch-9788/configmaps/e2e-watch-test-watch-closed 894ca51d-c766-4f89-b790-abd192e43109 302741 0 2020-09-07 09:20:11 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-09-07 09:20:11 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:20:11.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9788" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":303,"completed":302,"skipped":4907,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 7 09:20:11.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-ac495d3a-814c-41b5-b04c-d9532d71a7eb STEP: Creating a pod to test consume secrets Sep 7 09:20:11.689: INFO: Waiting up to 5m0s for pod 
"pod-secrets-ac9bc15a-4b04-4889-92b2-7f7bfabdc21d" in namespace "secrets-4424" to be "Succeeded or Failed" Sep 7 09:20:11.692: INFO: Pod "pod-secrets-ac9bc15a-4b04-4889-92b2-7f7bfabdc21d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.999089ms Sep 7 09:20:13.696: INFO: Pod "pod-secrets-ac9bc15a-4b04-4889-92b2-7f7bfabdc21d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007277497s Sep 7 09:20:15.701: INFO: Pod "pod-secrets-ac9bc15a-4b04-4889-92b2-7f7bfabdc21d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012020685s STEP: Saw pod success Sep 7 09:20:15.701: INFO: Pod "pod-secrets-ac9bc15a-4b04-4889-92b2-7f7bfabdc21d" satisfied condition "Succeeded or Failed" Sep 7 09:20:15.704: INFO: Trying to get logs from node latest-worker pod pod-secrets-ac9bc15a-4b04-4889-92b2-7f7bfabdc21d container secret-volume-test: STEP: delete the pod Sep 7 09:20:15.774: INFO: Waiting for pod pod-secrets-ac9bc15a-4b04-4889-92b2-7f7bfabdc21d to disappear Sep 7 09:20:15.781: INFO: Pod pod-secrets-ac9bc15a-4b04-4889-92b2-7f7bfabdc21d no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.4.197+594f888e19d8da/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 7 09:20:15.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4424" for this suite. STEP: Destroying namespace "secret-namespace-7506" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":303,"completed":303,"skipped":4911,"failed":0} SSSSSSSSSSSSSSSSSSSep 7 09:20:15.794: INFO: Running AfterSuite actions on all nodes Sep 7 09:20:15.794: INFO: Running AfterSuite actions on node 1 Sep 7 09:20:15.795: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":303,"completed":303,"skipped":4929,"failed":0} Ran 303 of 5232 Specs in 6778.427 seconds SUCCESS! -- 303 Passed | 0 Failed | 0 Pending | 4929 Skipped PASS